
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All,

In our environment we want to cut down on some Windows event logs. There are quite a few events with specific message patterns and script names that we want to discard. After researching and scouring the internet for ways to use regex in a blacklist, I'm having trouble finding an example similar to mine.

In our Windows event logs, we are seeing events still come through even though we are attempting to blacklist them. Here is an example where the ScriptName is C:\WINDOWS\system32\WindowsPowerShell\v1.\powershell.EXE and the event is not being dropped. I have a feeling this is due to the regex in place and the .* not picking up whatever comes after the initial C:\WINDOWS\. If anyone could shed any light on this, that would be much appreciated!

Message=Pipeline execution details for command line: $Col = new-object Data.DataColumn .
Context Information:
DetailSequence=1
DetailTotal=1
SequenceNumber=21
UserId=domain\AccountName$
HostName=ConsoleHost
HostVersion=46545
HostId=xxx-xxx-xx-xxxx
HostApplication=C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.EXE -command C:\WINDOWS\Script\debug-process.ps1
EngineVersion=
RunspaceId=-5e81---
PipelineId=1
ScriptName=C:\WINDOWS\Script\debug-process.ps1
CommandLine= $Col = new-object Data.DataColumn
Details: CommandInvocation(New-Object): "New-Object" ParameterBinding(New-Object): name="TypeName"; value="Data.DataColumn"

blacklist8 = EventCode="800" Message="domain\\AccountName" ScriptName=".+(?:C:\\WINDOWS\\Script\\.*)"
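For what it's worth, a sketch of a reworked blacklist (unverified; the stanza name is an assumption for illustration). Since ScriptName is embedded inside the multi-line Message text rather than being a standalone filter key, the match here is done on Message, with the (?s) flag so that . also crosses newlines:

```
# inputs.conf -- hypothetical stanza name, adjust to your actual input
[WinEventLog://Windows PowerShell]
# drop EventCode 800 events whose Message contains a ScriptName under C:\WINDOWS\Script\
blacklist8 = EventCode="800" Message="(?s).*ScriptName=C:\\WINDOWS\\Script\\.*"
```

Without (?s), the leading .+ in the original pattern cannot reach past the first newline in the Message value, which would explain events slipping through.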
Hello,

I'm trying to develop an app for Splunk. The app contains a Python 3 script with some dependencies. One of these dependencies imports pycurl, but when I try to launch the script using this command:

splunk cmd python3 /opt/splunk/etc/apps/wurfl_device_detection_splunk/bin/wurfl_device_detection.py

this is what I get:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/wurfl_device_detection_splunk/bin/wurfl_device_detection.py", line 45, in <module>
    wm_client = get_or_create_wm_client()
  File "/opt/splunk/etc/apps/wurfl_device_detection_splunk/bin/wurfl_device_detection.py", line 27, in get_or_create_wm_client
    globals()["wm_client"] = WmClient.create("http", wm_host, wm_port, "")
  File "/opt/splunk/etc/apps/wurfl_device_detection_splunk/bin/wmclient/wmclient.py", line 77, in create
    client = WmClient()
  File "/opt/splunk/etc/apps/wurfl_device_detection_splunk/bin/wmclient/wmclient.py", line 65, in __init__
    self.curl_post = pycurl.Curl()
AttributeError: module 'pycurl' has no attribute 'Curl'

- libcurl is properly installed.
- I'm using both the pycurl and wmclient dependencies in other Python apps outside of Splunk and they work fine. I've copied them into the bin/ directory of my Splunk app to make them visible.
- I've copied the pycurl.cpython-38-x86_64-linux-gnu.so file into the <splunk_home>/lib/python3.7/site-packages directory.

But I'm still getting this error message. Am I missing something? Should I put the .so file somewhere else? From the traceback I'd guess that the pycurl Python module is found, but it cannot load the libcurl it wraps. Any suggestions?
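One quick diagnostic (a sketch, run on the Splunk host) is to ask Splunk's bundled Python which pycurl it actually imports, since a stray module earlier on the path can shadow the compiled extension:

```
splunk cmd python3 -c "import pycurl, sys; print(pycurl.__file__); print(sys.version)"
```

If pycurl.__file__ points somewhere other than the .so you copied (for example at a plain-Python stub in the app's bin/ directory), the wrong module is being picked up first.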
Search query:

| inputlookup orderStatus.csv
| timechart count(value) as Orders
| eval P_bypass = round((bypassCandidates/Orders)*100,1)
| eval Goal = 50
| table _time Orders P_bypass Goal

With the above search, I make a column chart to represent the count of orders over time, and I make P_bypass an overlay so it shows on a different axis (%). My ask: I either need to add 'Goal' as a line chart without using the overlay, or I need to show different colors for the Orders columns based on value (order < 20: red, order > 21: green). Is either of those doable?
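If this is a Simple XML dashboard, one option (a sketch showing only the relevant options) is to put Goal on the overlay axis alongside P_bypass, so it renders as a second line over the columns:

```xml
<chart>
  <search>
    <query>| inputlookup orderStatus.csv | timechart count(value) as Orders
| eval P_bypass = round((bypassCandidates/Orders)*100,1) | eval Goal = 50</query>
  </search>
  <option name="charting.chart">column</option>
  <!-- draw both P_bypass and Goal as line overlays on the second axis -->
  <option name="charting.chart.overlayFields">P_bypass,Goal</option>
  <option name="charting.axisY2.enabled">true</option>
</chart>
```

Per-column conditional coloring based on value is not something the built-in column chart options handle directly, so the overlay route may be the simpler of the two.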
Hi, is there a way to rerun an alert repeatedly until it returns more than 0 results?
Hi. We've recently started using Alert Manager on Splunk Cloud (8.1.2) and have managed to create dynamic alerts based on a lookup of impact and urgency. These calculate the "priority" field in Alert Manager, and we're looking for a way to suppress the "informational" level alerts (low/low).

Under Settings > Suppression Rules, I figured I could set up a rule like this:

Rule type = Normal
Scope = Rule_name*
Field = $priority$
Condition = is
Value = informational

Yet this does not seem to suppress anything. I have tried Field = $result.priority$, but that doesn't work either. Any help would be greatly appreciated.
Hi all, I made a search that uses a regular expression to extract the username from the email address, because we noticed that a lot of phishing mails contain that pattern. The following line is the expression:

| rex field=receiver_email "(?<user>[a-zA-Z]+.[a-zA-Z]+)\@"

Now I want to use the extracted field "user" in a search query to verify whether the content body of an email contains a URL with that value in it. The search line that I tried is:

| search content_body="<https://*user*>"

Of course, this only matches the literal string "user", and I don't know how to make it use the field's value instead. So, just as an example, if the content body contains the URL

https://someurl.com/idontknow/blabla<USER>blabla

I should get a hit, because the username appears in that URL.

Thank you very much, Sasquatchatmars
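The search command treats its right-hand side as a literal, so one way (a sketch reusing the field names from the question) is a where clause with string concatenation, which does evaluate the field's value:

```
| rex field=receiver_email "(?<user>[a-zA-Z]+\.[a-zA-Z]+)@"
| where like(content_body, "%https://%" . user . "%")
```

Note that like() is case-sensitive; a case-insensitive alternative is `| where match(content_body, "(?i)https://\S*" . user)`.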
Hi all, I'm trying to add a dynamic (preferably interactive) table of contents to a dashboard for the PDF export function, because the exported report is over 20 pages long. Ideally, I would be able to click on a panel title and it would automatically take me there. I can't find a standard Splunk feature for this, so I tried using <a> tags in HTML to redirect to a specific panel id, which works in the dashboard. However, the <a> tags and href are not rendered in the PDF, which just shows the plain text. If an interactive table of contents is not possible, a dynamic one that just shows the page numbers of the corresponding panels would also be great. Does anyone have ideas on how to accomplish this? It would help me a lot!
Hi all, I would like to ask how we can use a lookup table to whitelist a set of src and dest pairs.

Sample traffic:
src 1.1.1.1 > dest 2.2.2.2
src 3.3.3.3 > dest 4.4.4.4

The traffic between src 1.1.1.1 and dest 2.2.2.2 should be excluded from the search results.
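A common pattern (a sketch; the lookup name and its columns are assumptions) is a lookup file holding the whitelisted pairs plus a marker column, then dropping the rows that matched:

```
... your base search ...
| lookup whitelist_pairs.csv src dest OUTPUTNEW is_whitelisted
| where isnull(is_whitelisted)
```

Here whitelist_pairs.csv would contain rows like `src=1.1.1.1, dest=2.2.2.2, is_whitelisted=1`; only events matching both src and dest get the marker, and the where clause removes them.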
Hi community, I've been using Splunk for about a month now and need some help.

If done correctly, I have the realtime volume per depot. Now I need the volume per depot at the end of each day last month. I'm stuck here; I hope someone can help me out.

index=BLA
| eventstats latest(status) as END_status by order_id
| search END_status IN ( "Product Arrived at Final Depot" "Processing Order at Depot" "Ready for Transit to Final Depot" "Processing Order for Client Delivery" "Product Ready for Client Delivery" "Ready for Transit to Nearest Depot" )
| where status=END_status
| lookup products.csv product_id OUTPUT length, width, depth
| eval "volume"=round(length*width*depth/1000000,2)
| stats sum(volume) as "volume in m3" by depot_id
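One way to turn this into an end-of-day figure per day (a sketch reusing the fields from the question; note it only sees orders that logged at least one event on a given day) is to make the latest-status calculation per order per day:

```
index=BLA earliest=-1mon@mon latest=@mon
| bin _time span=1d
| eventstats latest(status) as END_status by order_id, _time
| where status=END_status
| search END_status IN ( "Product Arrived at Final Depot" "Processing Order at Depot" "Ready for Transit to Final Depot" "Processing Order for Client Delivery" "Product Ready for Client Delivery" "Ready for Transit to Nearest Depot" )
| lookup products.csv product_id OUTPUT length, width, depth
| eval volume = round(length*width*depth/1000000, 2)
| stats sum(volume) as "volume in m3" by depot_id, _time
```

The `bin _time span=1d` before eventstats makes "latest" mean "latest that day" rather than latest over the whole search window.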
Hi, I have data in XML format. Among the many fields I have extracted, there is a field named pluginText, in the format below. I need two fields extracted from it: Nessus version and Plugin feed version. Also, if there is a rex I can use to extract all the fields in these tags with one universal pattern, that would be great. Thanks in advance!

See the sample below:

pluginText: <plugin_output>Information about this scan :
Nessus version : 7.6.3
Plugin feed version : 202010122335
Scanner edition used : Sample
Scan type : Windows Agent
Scan policy used : Windows_Server_2019
Scanner IP : 0.0.0.0
Thorough tests : no
Experimental tests : no
Paranoia level : 1
Report verbosity : 1
Safe checks : yes
Optimize the test : yes
Credentialed checks : yes
Patch management checks : None
Display superseded patches : yes (supersedence plugin did not launch)
CGI scanning : disabled
Web application tests : disabled
Max hosts :
Max checks : 5
Recv timeout : 5
Backports : None
Allow post-scan editing: Yes
Scan duration : unknown
</plugin_output>
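A sketch for the two named fields, plus a generic key/value pass over the same text (the destination field names are my choice; (?m) makes ^ and $ match per line, and max_match=0 returns all matches as multivalue fields):

```
| rex field=pluginText "Nessus version : (?<nessus_version>[^\r\n]+)"
| rex field=pluginText "Plugin feed version : (?<plugin_feed_version>[^\r\n]+)"
| rex field=pluginText max_match=0 "(?m)^(?<kv_key>[^:\r\n]+?)\s*:\s*(?<kv_value>[^\r\n]*)$"
```

After the third rex, kv_key and kv_value are parallel multivalue fields covering every "name : value" line in the plugin output.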
Hi all, I am currently planning and preparing the monitoring of a platform with Docker Swarm clusters running on underlying Linux and Windows hosts. Collecting the Docker logs seems straightforward, either with the Splunk logging driver sending them to the HEC or by monitoring the logs with a UF on the host. (Please correct me if I'm wrong.) For the Docker metrics, I could use some help on the collection approach. I found that the Splunk App for Infrastructure supports Docker monitoring, but only for standalone Linux hosts or for Kubernetes or OpenShift, not for Docker Swarm. Does anyone know why Docker Swarm is not supported? Will it be added in a future release? I found the following options for getting the Docker metrics into Splunk:

This Splunk conference topic looks interesting, but it seems to have never been fully productized (why not?): https://conf.splunk.com/files/2017/slides/monitoring-docker-containers-with-splunk.pdf
The only fully working solution looks like this third-party app: https://splunkbase.splunk.com/app/3723/
Then there is also the simple variant that is not advised for Docker Swarm clusters: https://splunkbase.splunk.com/app/4468/

Any advice on which path to take for collecting the Docker stats with Splunk? Thanks for your help!
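On the logs side, a minimal sketch of the Splunk logging driver configured daemon-wide in /etc/docker/daemon.json (the URL and token are placeholders):

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-url": "https://splunk.example.com:8088",
    "splunk-token": "00000000-0000-0000-0000-000000000000",
    "splunk-insecureskipverify": "false"
  }
}
```

Setting it at the daemon level means every container on that Swarm node inherits the driver without per-service configuration.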
Hi Team, we are using Splunk Cloud in our environment, and there is a requirement from our security team to install the below-mentioned add-on (OTX) into Splunk Cloud: https://splunkbase.splunk.com/app/4336/ When I checked, it appears not to be supported on Splunk Cloud. We do have a Splunk heavy forwarder running version 7.3.1, so can I install the add-on on the heavy forwarder instead? Kindly confirm. Also, if we can install it there, can we ingest the logs into Splunk using the API key? Since I have the API key with me, could you walk me through the configuration steps? It would be really helpful if anyone has documentation for this.
Hello, in my dashboard I have defined a multiselect field with the following possible values: dt1, dt2, dt3, and total. Now I would like to use them in my search in the aggregation functions (avg), passing them in via the kpi token. However, I have an issue with the aggregation functions themselves, as they are not able to pick up the VALUES of the newly created fields f1, ..., fn. I was thinking of something like the following in my search:

index=mlbso sourcetype=webdispatcher
| eval kpi = "dt3 total dt1 dt2 dt4"
| rex field=kpi "(?P<f1>dt1|dt2|dt3|dt4|total) (?P<f2>dt1|dt2|dt3|dt4|total) (?P<f3>dt1|dt2|dt3|dt4|total) (?P<f4>dt1|dt2|dt3|dt4|total) (?P<f5>dt1|dt2|dt3|dt4|total)"
| timechart span=15m avg(f1) as avg_server, avg(f2) as avg_total by "DBSID"

but avg does not recognize the values of f1 and f2 as arguments. What would be the best way to do this? Kind regards, Kamil
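The reason this fails is that avg(f1) aggregates a field literally named f1; SPL does not dereference a field's value as another field name there. One alternative (a sketch; the token name kpi is assumed) is to let the multiselect input build the aggregation list itself in Simple XML:

```xml
<input type="multiselect" token="kpi">
  <label>KPI</label>
  <choice value="dt1">dt1</choice>
  <choice value="dt2">dt2</choice>
  <choice value="dt3">dt3</choice>
  <choice value="total">total</choice>
  <!-- each selected value is wrapped as avg(...), joined with commas -->
  <valuePrefix>avg(</valuePrefix>
  <valueSuffix>)</valueSuffix>
  <delimiter>, </delimiter>
</input>
```

The search then becomes simply `| timechart span=15m $kpi$ by DBSID`, with the token expanding to e.g. `avg(dt1), avg(total)`.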
Hi all. I am generating a dashboard table containing possible indicators of compromise observed on a network. The search that generates the table includes:

| eval ActionText=if('model'="Watchlisted domain","Check on Virus Total",(mvappend("Check on Virus Total","Add to Watchlist")))

Along with the rest of the search, I end up with a table like this:

... | IoC      | ... | model              | ActionText           | ... | ...
----------------------------------------------------------------------------
... | <domain> | ... | Watchlisted domain | Check on Virus Total | ... | ...
... | <domain> | ... | Suspicious domain  | Check on Virus Total | ... | ...
                                            Add to Watchlist
... | <domain> | ... | Watchlisted domain | Check on Virus Total | ... | ...

I would like to configure a drilldown so that clicking "Check on Virus Total" in the table performs a GET request using the IoC field as a token, and clicking "Add to Watchlist" sends a POST to an internal API, again using the IoC from the corresponding row/event. Any ideas for a starting point?
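As a starting point for the GET side, a Simple XML drilldown sketch (the VirusTotal search URL pattern is an assumption; a POST to an internal API is not possible from a plain Simple XML drilldown and would need a custom JavaScript click handler or similar):

```xml
<drilldown>
  <condition field="ActionText">
    <!-- open the clicked row's IoC in a new tab -->
    <link target="_blank">https://www.virustotal.com/gui/search/$row.IoC$</link>
  </condition>
</drilldown>
```

$row.IoC$ resolves to the IoC cell of whichever row was clicked, regardless of which column received the click.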
How can I combine these three queries, given that everything before the pipe is the same?

query1: index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | timechart span=1m count

query2: index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | stats count(response_code) by response_code

query3: index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=* | stats avg(response_time) as "Avg Response Time" max(response_time) as "Max Response Time" p99(response_time) as "99 Percentile" p95(response_time) as "95 Percentile"
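If these run in a dashboard, a base search with post-process searches runs the shared part once (a Simple XML sketch; note the explicit `| fields`, which a non-transforming base search needs so the post-process searches can see those fields):

```xml
<search id="base">
  <query>index=abc source="*/d/e/f.log" artifact_id=g*h*i* host!="jkl*" cloud=mno consumer_id=* response_code=*
| fields _time response_code response_time</query>
</search>
<chart>
  <search base="base"><query>| timechart span=1m count</query></search>
</chart>
<table>
  <search base="base"><query>| stats count by response_code</query></search>
</table>
<table>
  <search base="base"><query>| stats avg(response_time) as "Avg Response Time" max(response_time) as "Max Response Time" p99(response_time) as "99 Percentile" p95(response_time) as "95 Percentile"</query></search>
</table>
```

Each panel's post-process query starts at the pipe, so the filtering work is shared across all three visualizations.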
Hi Team, currently we are using Splunk Cloud in our organization: we have a dedicated search head for ES and an ad-hoc search head for the rest of the applications. We now need an additional search head for development and testing purposes, i.e., all the index logs should be visible and searchable on this additional search head. Do we need to contact Splunk Support to host this in the cloud, and will there be any cost involved? Alternatively, could we build a new server in Azure, install the Splunk package on it as the additional search head, and integrate it with Splunk Cloud so that all the index logs are searchable and visible there? That way, before going directly to production, we would perform testing in the development environment. We want a cost-effective solution, since we will mainly use it for development. Kindly help address my query.
Hi, is there any way to connect to an AWS S3 bucket from Splunk Cloud? We want to download a text/log file from the S3 bucket to the local system on a button click in a Splunk HTML dashboard. We are trying with a Python script. Is there any pre-built add-on available? If we go with Python, is the Splunk Python SDK already available in Cloud?
Hello everyone, I want to add a constant prefix to all my indexes and then forward them. This is my props.conf:

props.conf
[default]
TRANSFORMS-index = rename-index

and here is my transforms.conf:

transforms.conf
[rename-index]
SOURCE_KEY = _MetaData:Index
REGEX = .
FORMAT = foo-$1
DEST_KEY = _MetaData:Index

Actually, Splunk renames all my indexes to the literal string foo-$1, while I want each index renamed to, for example, foo-eventlog, foo-iislog, and so on. Any help would be appreciated. Thanks in advance.
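In FORMAT, $1 refers to a regex capture group, and `REGEX = .` contains none (and matches only a single character); a sketch of a corrected transform that captures the whole original index name:

```
# transforms.conf
[rename-index]
SOURCE_KEY = _MetaData:Index
# capture the entire existing index name into group 1
REGEX = (.+)
FORMAT = foo-$1
DEST_KEY = _MetaData:Index
```

With the capture group in place, an event destined for index eventlog gets rewritten to foo-eventlog, iislog to foo-iislog, and so on.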
On the forwarder, we have set the default group in outputs.conf to the indexer, so all logs are forwarded to the indexer by default. But if I do not want to send specific logs to the indexer in that default group, what needs to be done? Please help.
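One common approach (a sketch; the source path is a placeholder, and note that nullQueue filtering takes effect at a parsing tier such as a heavy forwarder or the indexer itself, not on a universal forwarder) is to route the unwanted events to the nullQueue so they are discarded before indexing:

```
# props.conf
[source::/var/log/unwanted.log]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Here `REGEX = .` matches every event from that source; a narrower regex can be used to drop only a subset.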
I am using the *nix agent to gather disk space, and I only collect "df" information once per day. I want to present a statistics table that only shows the rows with values. With the query below I get a lot of empty rows; I'd like to show only the rows with data.

index=os sourcetype=df host=hostname filesystem=*mapper*lim*
| eval LIM_PROD_DISK=case(
    filesystem LIKE "%limproda1%", "limproda1",
    filesystem LIKE "%limproda2%", "limproda2",
    filesystem LIKE "%limprodwide0%", "limprodwide0",
    filesystem LIKE "%limprodwide1%", "limprodwide1",
    filesystem LIKE "%limprodwide2%", "limprodwide2",
    filesystem LIKE "%limtoolsvol%", "limptoolsvol")
| bin _time span=1d
| timechart max(storage_used_percent) by LIM_PROD_DISK
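Because the data arrives only once per day, timechart still emits a row for every empty time bucket (and a NULL column for filesystems the case() did not match). A sketch (fields reused from the question) that drops both, by filtering out unmatched rows and switching to stats, which skips buckets with no events:

```
index=os sourcetype=df host=hostname filesystem=*mapper*lim*
| eval LIM_PROD_DISK=case(
    filesystem LIKE "%limproda1%", "limproda1",
    filesystem LIKE "%limproda2%", "limproda2",
    filesystem LIKE "%limprodwide0%", "limprodwide0",
    filesystem LIKE "%limprodwide1%", "limprodwide1",
    filesystem LIKE "%limprodwide2%", "limprodwide2",
    filesystem LIKE "%limtoolsvol%", "limptoolsvol")
| where isnotnull(LIM_PROD_DISK)
| bin _time span=1d
| stats max(storage_used_percent) as storage_used_percent by _time, LIM_PROD_DISK
```

Unlike timechart, stats only produces rows for (_time, LIM_PROD_DISK) combinations that actually have data.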