All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, We have an application that periodically pulls search results from a scheduled search using the Splunk API, but we are encountering an issue where there is an excess of expired jobs (5,000+) which are being kept for over a month for some reason. Because the application has to look through each of these jobs, it is taking too long and timing out. We tried deleting the expired jobs through the UI, but they keep popping back up / not going away. Some of these now say "Invalid SID" when I try to inspect them. Is there any way we can clear these in bulk, preferably without resorting to the UI (which only shows 50 at a time)?
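A minimal sketch of the kind of bulk deletion being asked about, using the search jobs REST endpoint; the host, port, and credentials are placeholders, jq is assumed to be available, and each entry's name in the /services/search/jobs listing is the job's SID:

for sid in $(curl -sk -u admin:changeme "https://splunk-host:8089/services/search/jobs?count=0&output_mode=json" | jq -r '.entry[].name'); do
  # DELETE removes the job artifact for that SID
  curl -sk -u admin:changeme -X DELETE "https://splunk-host:8089/services/search/jobs/$sid"
done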
I have some passive DNS data with timestamps that look like this in JSON logs: {"timestamp":"2021-10-21 16:31:01","timestamp_s":1634833861,"timestamp_ms":973448, So it has a conventional timestamp first, then a full Unix epoch timestamp in seconds, followed by timestamp_ms":990877, which holds only the sub-second part of the time (actually microseconds). The more conventional form would have been timestamp_s":1634834347.990877. So far I have not been able to get the extracted time to include the sub-second value. I am using a TIME_PREFIX that should skip the conventional timestamp. Most recently, I used SEDCMD to make the timestamp look more like a normal epoch value --- timestamp_s":1634834347.990877, --- but maybe the SEDCMD only happens after the timestamp is determined. I have used something similar to this:
TIME_PREFIX = timestamp_s":
TIME_FORMAT = %s.%6N
Any help appreciated!
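A sketch of one ingest-time variant, assuming a hypothetical sourcetype name and a Splunk version where INGEST_EVAL and the json_extract() eval function are available at ingest time: TIME_PREFIX/TIME_FORMAT extract the whole seconds from timestamp_s, and an ingest-time eval adds the microseconds afterwards.

props.conf
[passive_dns_json]
TIME_PREFIX = "timestamp_s":
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
TRANSFORMS-subsecond = add_microseconds

transforms.conf
[add_microseconds]
# timestamp_ms actually holds microseconds, so divide by 1,000,000
INGEST_EVAL = _time=_time + tonumber(json_extract(_raw, "timestamp_ms"))/1000000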
We just stood up a new distributed deployment with 3 indexers and a CM. I was able to connect 1 indexer to the CM successfully, but when I was trying to connect the other 2 indexers to it, I was getting the error "Could not contact manager. Check that the manager is up, the manager_uri=https://xxxxxxx:8089 and secret are specified correctly." I know the secret is right and it is the correct URI, firewalld is disabled, I am able to netcat to the host via 8089, and the indexer GUIDs are unique.
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to xxxxxx:8089.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.
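For reference, a sketch of the peer-side configuration and a REST reachability check, using 8.2-style CLI flags (older releases use -mode slave and -master_uri) and placeholder credentials; the secret must match the cluster manager's pass4SymmKey:

splunk edit cluster-config -mode peer -manager_uri https://xxxxxxx:8089 -secret <pass4SymmKey> -replication_port 9887
splunk restart
# verifies that the manager's management port answers REST requests, not just a TCP connect
curl -k -u admin:changeme https://xxxxxxx:8089/services/server/info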
Hello, I can estimate the average number of events Splunk has for an index/sourcetype using the following query. How would I estimate the average volume of data (in MB) Splunk receives per hour for that index? Thank you so much, appreciate your support. Query to estimate the average number of events per hour:
index=win_test sourcetype=* | bucket _time span=1h | stats count by _time | stats avg(count) as "Ave Events per Hour"
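A sketch of one way to approximate the hourly volume, using len(_raw) as a per-event byte estimate; this approximates the raw data size rather than the exact licensed volume:

index=win_test sourcetype=*
| eval bytes=len(_raw)
| bucket _time span=1h
| stats sum(bytes) as bytes_per_hour by _time
| stats avg(bytes_per_hour) as avg_bytes_per_hour
| eval "Avg MB per Hour"=round(avg_bytes_per_hour/1024/1024, 2)
| table "Avg MB per Hour"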
Hi, I have the following command in my query:
My splunk search | eval message=IF((like(source,"ABC%") OR like(source,"DEF%")) AND avg_latency>120 ,"Host with more than 2 minutes Latency","")
where avg_latency is a field with values, but for some reason the above condition is not working for me. Could someone check whether there is any format issue in my eval condition and let me know how I can correct it?
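A sketch of the same eval with two common adjustments, a lowercase if() and an explicit numeric cast on avg_latency; these are guesses at the cause, not a confirmed fix:

| eval message=if((like(source,"ABC%") OR like(source,"DEF%")) AND tonumber(avg_latency)>120, "Host with more than 2 minutes Latency", "")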
@Kenshiro70  I have just read your most brilliant answer here https://community.splunk.com/t5/Splunk-Search/What-exactly-are-the-rules-requirements-for-using-quot-tstats/m-p/319801 I have applied it to one use case, but I am a little stuck now on another use case and I was hoping you might be able to give me 5 minutes, please. The following code is working. I have used it to replace a join. The issue is when I need to add a third mstats. There are just some rules I can't seem to understand or crack. Any help would be great - cheers. It happens when I add an additional "by" clause, "used.by". I suppose the real question is how to handle this when there are multiple BY clauses from different | mstats commands.

Working:

| mstats append=t prestats=t min("mx.service.status") min(mx.service.dependencies.status) min(mx.service.resources.status) min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type"
| mstats append=t prestats=t max("mx.service.replicas") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 service.type IN (agent-based launcher-based) span=10s BY service.name
| eval forked=""
| mstats append=t prestats=t min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type" forked
| mstats append=t prestats=t min(mx.service.dependencies.status) WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s
| rename service.name as Service_Name, service.type as Service_Type
| stats max("mx.service.replicas") as replicas min("mx.service.deployment.status") as Deployment min("mx.service.status") as Status_numeric min(mx.service.dependencies.status) as Dependencies min(mx.service.resources.status) as Resources by _time Service_Name Service_Type forked
| sort 0 _time Service_Name

This is the code that is not working. I added "used.by" to the first mstats, as it is needed for min(mx.service.dependencies.status). However, as soon as I add it I lose a lot of data.

| mstats append=t prestats=t min(mx.service.dependencies.status) min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type" "used.by"
| eval forked=""
| mstats append=t prestats=t min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type" "forked"
| mstats append=t prestats=t max("mx.service.replicas") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 service.type IN (agent-based launcher-based) span=10s BY service.name
| rename service.name as Service_Name, service.type as Service_Type
| stats min("mx.service.deployment.status") as Deployment min(mx.service.dependencies.status) as Dependencies_x max("mx.service.replicas") as replicas by _time Service_Name Service_Type forked "used.by"
| sort 0 - Service_Name _time

Not working.
Hello, I have an index and 3 custom sourcetypes in place. If a source wants to stream logs into Splunk, do I need to create 3 HEC tokens? I can see that when I try to create an HEC input, it asks me to select a sourcetype, and I can only select one. Please help me with this situation. Thanks
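For reference, a sketch showing that the sourcetype chosen on a token is only the default: an event sent to the /services/collector/event endpoint can override sourcetype (and index, if the token is allowed to write to it) per event. The host, token, index, and sourcetype names here are placeholders:

curl -k https://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"index": "my_index", "sourcetype": "custom_sourcetype_2", "event": {"msg": "hello"}}'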
Hello. I am running 8.2.2 on Linux. We have four clustered indexers and are using SmartStore. I would like to empty an index (and recover the disk space). I have thus chosen to remove the old_data index from the cluster, then add it back again. I have performed these steps:
1. Stop any data being sent to the index.
2. Edit indexes.conf and delete the index's stanza (via the CM), then apply the changes to the peer nodes (each restarts).
3. Remove the index's directories from each peer node.
4. Check on the SHC for events in the index (index=old_data); no events are returned (all time).
5. Once the cluster shows that all indexes are 'green', re-add the index as normal (editing indexes.conf again and applying the update).
However, now searching the index on the SHC returns some/most of the events. My guess is that the cache manager / the S3 storage also needs to be purged. If so, how is this best achieved? I have avoided using index=old_data | delete because I understand this will only mask the data from searches (and I want the disk space back too). Many thanks for your time.
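A sketch of the kind of remote-storage cleanup being described, assuming the SmartStore remote store is AWS S3; the bucket name and prefix below are placeholders, the actual path should be taken from the index's remotePath / remote.s3 settings in indexes.conf, and this would only make sense after the index stanza has been removed from the cluster:

# list what SmartStore holds for the index, then remove it
aws s3 ls s3://my-smartstore-bucket/old_data/
aws s3 rm --recursive s3://my-smartstore-bucket/old_data/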
Hi, I would like to count the values of a multivalue field by value. For example:

| makeresults
| eval values_type=split( "value1,value2,value1,value2,value1,value2,value1,value2,value2,value2,value2,",",")
| eval values_count=mvcount(values_type)
| eval value1=mvfilter(match(values_type,"value1"))
| eval value1_count=mvcount(value1)
| eval value2_count=values_count - value1_count
| table values_type message_count values_count value1_count value2_count

Is there another way to do this? For example, if I don't know the possible values, this way doesn't work. Thanks in advance
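A sketch of one generic alternative that does not require knowing the values in advance, using mvexpand and stats (the field and sample values are reused from the example above):

| makeresults
| eval values_type=split("value1,value2,value1,value2,value1,value2,value1,value2,value2,value2,value2,", ",")
| mvexpand values_type
| where values_type!=""
| stats count by values_type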
Analyze yarn logs on the Hadoop cluster by using Splunk. The yarn logs are stored on different nodes in the Hadoop cluster. For this requirement, what configuration is required? Should we install a Splunk forwarder on all the nodes, or only on the edge node? What configurations are needed? Thanks
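A sketch of a universal-forwarder monitor input for this kind of setup, assuming a forwarder runs on each node that holds logs; the log directory, index, and sourcetype names are placeholders and should match the cluster's yarn.nodemanager.log-dirs setting:

inputs.conf
[monitor:///var/log/hadoop-yarn/containers]
disabled = false
recursive = true
index = hadoop
sourcetype = yarn:container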
Hello All, I have a use case to consume alerts from a tool called Dataminr into Splunk. Can someone suggest the best approach for this integration? Thanks
Hi, I have installed the Jira issues collector add-on to onboard the Jira logs into Splunk. The configuration is done and I am able to see the logs in Splunk, but my Jira board contains 4,685 events and I can only see 1,000 events in Splunk. I have read somewhere that this add-on will only pull 1,000 events at a time. Can anyone help me with how to onboard the remaining events? Thanks
I am attempting to use an HEC with basic authentication via HTTPS, but I am receiving a 403 "Forbidden" response when using an Authorization header with a Base64-encoded username:password pair. The username:HEC-token form works, as is hinted at in the documentation, so my question is whether there is any way to use a user's password for authentication, or a session key from a login request, when posting data to an HEC. If not, are there any endpoints that will return a response on an HTTP request? Thanks in advance for any advice you can give.
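For reference, a sketch of the two token-based forms that HEC is documented to accept (the token value and host are placeholders; with basic auth, the username can be anything and the token goes in the password position):

curl -k https://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "test"}'

curl -k -u "anyuser:11111111-2222-3333-4444-555555555555" \
  https://splunk-host:8088/services/collector/event \
  -d '{"event": "test"}'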
I am trying to send the following WMI WinEventLog event to the null queue, as it needs to be dropped, but this doesn't seem to be working. Can someone help me with this? I have configured the props & transforms on the Heavy Forwarder like this:

props.conf
[source::WinEventLog:Microsoft-Windows-WMI-Activity/Operational]
TRANSFORMS-null = wmi-setnull

transforms.conf
[wmi-setnull]
REGEX = ((.|\n)*)EventCode=5857\s+((.|\n)*)ProviderPath\s+=\s+(%systemroot%\\system32\\wbem\\(wmiprov\.dll|ntevt\.dll|wmiprvsd\.dll)|C:\\Windows\\(System32\\wbem\\krnlprov\.dll|CCM\\ccmsdkprovider\.dll)|C:\\Program\sFiles\\(Microsoft\sSQL\sServer\\.*\\Shared\\sqlmgmprovider\.dll|VMware\\VMware Tools\\vmStatsProvider\\win64\\vmStatsProvider\.dll))
DEST_KEY = queue
FORMAT = nullQueue
Hello to everyone, my issue is that when I use sendemail in a scheduled search to send results via email in CSV format, the columns in the CSV are not in the same order I tabled them in the search. For example:

<some_search>
| table field1 field2 field3
| outputcsv TestInvioMail_searchOutput.csv
| stats values(recipient) AS emailToHeader
| mvexpand emailToHeader
| map search="|inputcsv TestInvioMail_searchOutput.csv | where recipient=$emailToHeader$ | sendemail sendresults=true sendcsv=true server=<my_email_server_address> from=<sender_server_address> to=$emailToHeader$ subject=\"Some object\" message=\"Some message\""
| append [|inputcsv TestInvioMail_searchOutput.csv]

I also need to use the map command because I have to send different results to different recipients, since inserting the recipient token in Splunk's mail alert panel doesn't work. Sendemail works fine and every recipient receives the correct results, but they receive a CSV in which the fields are in a different order from the one specified in the table command (for example, in the CSV the column order is field2 field3 field1). I also tried adding the width_sort_columns=<bool> parameter to the sendemail command (after sendcsv=true), but without success. Do you have any suggestions? Thanks in advance.
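One variant to try (a sketch, not a confirmed fix): re-impose the column order with an explicit table inside the map subsearch, right before sendemail, so the CSV that sendemail builds inherits that order; the field names are taken from the example above:

| map search="| inputcsv TestInvioMail_searchOutput.csv | where recipient=$emailToHeader$ | table field1 field2 field3 | sendemail sendresults=true sendcsv=true server=<my_email_server_address> from=<sender_server_address> to=$emailToHeader$ subject=\"Some object\" message=\"Some message\""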
Hi, I have a query below with a join condition. The issue is that if I hardcode the name value I get results, but when I remove it I don't see any results, and I get the error shown in the screenshot. I have validated that it is not because of a space issue. Can somebody suggest what is wrong?
Is bucket repair on an index cluster any different from non-clustered indexers?
Should splunkd be running on the cluster master?
Should it be in maintenance mode?
When using network storage, should it be mounted to all of the indexers or only one?
Is the fsck command run from the cluster master or from one of the indexers?
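For reference, a sketch of the fsck invocations being asked about, run locally on an indexer against a single index; the index name is a placeholder, and whether splunkd should be stopped and the cluster placed in maintenance mode is exactly what the question asks and is not settled here:

splunk fsck scan --all-buckets-one-index --index-name=main
splunk fsck repair --all-buckets-one-index --index-name=main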
Hi Everyone, can someone please help or let me know the steps for sending clear events from Splunk to ServiceNow? For example: hung threads show up for the first 15 minutes from an alert and an event is sent to ServiceNow for a particular CI, and then for the next 15 minutes the alert finds no hung threads.
Hi, The Jenkins add-on was installed and everything is working, but there is a broken link when clicking on the Splunk button. When I'm in the main dashboard and I click the Splunk button, I get the overview results: /app/splunk_app_jenkins/overview?overview_jenkinsmaster=master But when I go into the job itself and click the button, I get a 404 error: /app/splunk_app_jenkins/build_analysis?build_analysis_jenkinsmaster=jenkins&build_analysis_job=job_name Jenkins Splunk 1.9.7, Splunk App for Jenkins 2.0.4. Where might the problem be? Thanks
We have a central syslog server that Palo Alto logs are pushed to, along with some other devices; each host has its own folder on the syslog server where data for that particular host is stored. From a Splunk point of view, we are a cloud-hosted customer. I have today installed the Palo Alto app for Splunk and am wondering about the best way to achieve the below. As the data is coming into index=syslog and sourcetype=syslog, the inputs on the app are not working, since it expects particular sourcetypes (pan_logs, for example). Is it possible to override and redirect the PA hosts from the syslog stream to the correct index and sourcetype?  Is it possible to filter out the
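A sketch of one way this is often handled, assuming a forwarder reads the per-host syslog directories before the data reaches Splunk Cloud: give the Palo Alto folders their own monitor stanzas with the index and sourcetype the app expects. The path, index, and sourcetype below are placeholders taken from the question and should be confirmed against the app's documentation:

inputs.conf on the syslog-collecting forwarder
[monitor:///var/log/syslog/paloalto-fw01]
disabled = false
index = pan_logs
sourcetype = pan_logs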