All Topics

tstats shows an error if I include a JSON field in the "where" clause; the same happens with CSV fields. For example, if my source looks like {"host": "<hostname>", "IP": "<IP address>"} and I run

| tstats count where IP=10.0.0.1

Splunk displays "When used for 'tstats' searches, the 'WHERE' clause can contain only indexed fields. Ensure all fields in the 'WHERE' clause are indexed. Properly indexed fields should appear in fields.conf." The problem with fields.conf is that it doesn't deal with the original data structure. With JSON, there is always a chance that a regex will not capture a field properly; with CSV, the failure rate is even higher. Is there some way to use tstats with structured sources? (I notice that despite the warning, tstats still performs OK, but I'd rather users didn't see such an error message.)
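One ingest-time approach worth considering (a sketch; the sourcetype name my_json is hypothetical, and indexed extractions grow index size) is to have Splunk index the structured fields at parse time with INDEXED_EXTRACTIONS, which makes them legal in a tstats WHERE clause without a fields.conf regex:

```
# props.conf on the forwarder that first parses the data
# (sourcetype name is an assumption; use your own)
[my_json]
INDEXED_EXTRACTIONS = json
```

The same setting accepts csv for delimited sources, which avoids the regex-capture fragility described above.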
How do I specify an earliest/latest search relative to the global time range picker? If I choose 9/22/2022 in the global time range picker, I want my search to cover 2am to 3pm on that day. When I specify earliest=@d+2h latest=@d+15h, this completely overrides the global time picker, and the snap is applied to the current time instead of the date chosen in the global time range picker.
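One workaround (a sketch, not an official pattern): keep the global picker as the outer range and filter inside the search using the picker's boundaries, which addinfo exposes as info_min_time:

```
... your base search ...
| addinfo
| where _time >= relative_time(info_min_time, "@d+2h")
    AND _time <  relative_time(info_min_time, "@d+15h")
```

This keeps the day selection in the picker's hands while the search narrows it to the 2am-3pm window.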
Has anyone worked with Splunk and any PDF document management software to enable the addition of Bates numbering and redaction for legal documents? This is a unique use case and could be widely utilized if successful.
Hello, I have an odd issue which seems to have been resolved, but I would like to know its root cause. I inherited a Splunk configuration with one of the stanzas in inputs.conf being:

[monitor:///var/log/messages*]
sourcetype = syslog
index = os
disabled = 0

When I run ls -l on /var/log/messages* I get:

-rw-------. 1 root root 7520499 Sep 23 07:15 messages
-rw-------. 1 root root 4795535 Aug 28 01:45 messages-20220828
-rw-------. 1 root root 6636499 Sep 4 01:42 messages-20220904
...

When I run an SPL search on any of the possible sources (since the stanza uses "*"), I get no results except for source=messages. I get no results for source=messages-20220828, even if I extend earliest=-365d. When rsyslog rotated the messages log file this past week, at about 2am on Saturday, Splunk stopped indexing the messages file. Linux kept writing to it, so that side seems to be working as expected. The last log entry Splunk recorded was:

_time = 2022-09-18 01:46:40
_raw = Sep 18 01:46:40 ba-dev-web rsyslogd: [origin software="rsyslogd" swVersion="8.24.0-57.el7_9.3" x-pid="1899" x-info="http://www.rsyslog.com"] rsyslogd was HUPed

I restarted the splunkforwarder on the affected server; this fixed the issue and Splunk started indexing the messages log entries again. Because restarting the forwarder manually is not an adequate permanent solution, I created the stanza below:

[monitor:///var/log/messages]
index = test
disabled = 0

I do not believe I need the "*" because 1) the messages* sources are not being indexed by Splunk anyway (only source=messages), and 2) we do not need to index the rotated messages backup files.
When I came to work today, 18 hours after the "fix" (restart of the Splunk forwarder), my stanza is still working and indexing log entries as expected, but the previous one, [monitor:///var/log/messages*], no longer indexes anything. Using the working stanza, I determined that the last entries before Splunk stopped indexing were (first column is _time, second is _raw):

2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web systemd: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web systemd: Stopping Systemd service file for Splunk, generated by 'splunk enable boot-start'...
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web splunk: Dying on signal #15 (si_code=0), sent by PID 1 (UID 0)
2022-09-22 14:03:38 Sep 22 14:03:38 ba-qa-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:37 Sep 22 14:03:37 ba-qa-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:36 Sep 22 14:03:36 ba-qa-web audisp-remote: queue is full - dropping event

The last entry for the stanza that stopped working was:

2022-09-22 14:03:37 Sep 22 14:03:37 ba-qa-web audisp-remote: queue is full - dropping event

All the other monitor and scripted inputs on that server are working, except for the one above. The forwarder version is 7.2.3; I run other forwarders on this version that index messages log entries and work as expected. The stanza I used was a copy-and-paste from the Splunk_TA_nix add-on (except I removed the other log files and kept just messages), so IMO this should be best practice. I have a few questions:
1. What might be the reason the stanza with "*" no longer works while the one without it does?
2. Am I correct to believe we do not need the stanza with "*"? What consequences of dropping it might I not be aware of?
3. Why would PID 1 (UID 0, root) kill Splunk? (I believe this is why Splunk stopped indexing messages log files the second time.)
4. Any insights into this issue would be greatly appreciated. As far as I know right now, using my stanza should be good practice if we do not need the rotated messages files, but I am concerned I am missing something.
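For reference, one common middle ground (a sketch, not a diagnosis of this specific failure): keep the wildcard so events written just before rotation are still picked up from the renamed file, while skipping stale archives with ignoreOlderThan:

```
# inputs.conf (the 7d threshold is an assumption; tune to your rotation cadence)
[monitor:///var/log/messages*]
sourcetype = syslog
index = os
disabled = 0
# skip files whose modification time is older than 7 days (rotated archives)
ignoreOlderThan = 7d
```

The tradeoff of dropping "*" entirely is that anything rsyslog writes between the rotation and the forwarder reopening /var/log/messages can be missed.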
Is it possible to add text to a chart? We have a couple of color-blind users, and as far as our developers know, there's no way to add text directly on top of the chart values. I find it hard to believe that Splunk doesn't have this basic accessibility feature. I would appreciate any input on this. (Screenshots attached: the current chart vs. what we would like to achieve.) Thanks.
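If the dashboard is Simple XML, one built-in option worth trying (assuming a column or bar chart) renders the numeric value of each data point directly on the chart:

```
<option name="charting.chart.showDataLabels">all</option>
```

Add it inside the <chart> element of the panel; "minmax" is an alternative value that labels only the extremes.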
Hi, I am trying to monitor logs on a server. I have a UF on it and am trying ./splunk add monitor. When I supply the path, index, and so on, I keep getting this error: "Parameter name: path must be a file or directory". I have gone through tons of questions on here, but none answers this particular one. Thanks for your help.
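For comparison, the usual shape of the command (the path, index, and sourcetype below are hypothetical) puts the path immediately after add monitor, quoted if it contains spaces or wildcards:

```
./splunk add monitor "/var/log/myapp/app.log" -index my_index -sourcetype my_sourcetype
```

That error usually suggests the first argument was not parsed as an existing path, e.g. options were placed before the path, the path has a typo, or the user running splunk cannot see it.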
We are using OCP 4.10 to deploy the splunk/splunk:7.2.2 image, but the pod is going into CrashLoopBackOff state, and in the logs we see this error:

sh: /opt/container_artifact/splunk-container.state: Permission denied

Could you please help with this issue?
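One thing worth checking (an assumption, not a confirmed diagnosis): OpenShift's restricted SCC runs containers under a random UID, which typically does not own /opt/container_artifact inside the splunk/splunk image. A sketch of a pod-level security context aligned with the image's default user follows; the UID/GID values are assumptions based on the image's splunk user and should be verified against your image:

```
# Deployment/StatefulSet pod spec fragment (values are assumptions)
securityContext:
  runAsUser: 41812
  fsGroup: 41812
```

Alternatively, the cluster admin can bind a less restrictive SCC (e.g. nonroot) to the service account running the pod.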
We are doing a Splunk integration with OCP 4.10, so we need to install Splunk, but after installation we get this error in the logs:

sudo: unable to send audit message: Operation not permitted

We get this error after the execution tasks run. Could you please help us with this issue?
When I was studying macros, I sometimes saw arguments placed between quotes ('...') and sometimes between dollar signs ($...$). Does anyone know which we have to use, quotes or dollar signs?
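For what it's worth, the two live in different places (the macro name below is hypothetical): inside the macro definition in macros.conf, each argument is referenced as $arg$; quotes are just ordinary SPL string quoting around a value, wherever a string is needed:

```
# macros.conf: the definition substitutes the argument with $...$
[filter_by_host(1)]
args = h
definition = host="$h$"
```

When calling the macro in a search you wrap the whole invocation in backticks: index=web `filter_by_host("web01")`.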
Hello All, I am kind of new to Splunk. I need help removing "ms" from the data sets below before the data is indexed, so that I can use the tstats command on these values to create faster searches. Could anyone please help with this? Sample data:

2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)

How do I remove the string "ms" after all the response times in this data?
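One ingest-time approach (a sketch; the sourcetype name is hypothetical, and note that SEDCMD rewrites _raw for every consumer of the data) is a sed-style replacement in props.conf that strips "ms" where it follows digits and precedes the closing parenthesis, matching the sample above:

```
# props.conf on the indexer or heavy forwarder that parses this sourcetype
[my_app_sourcetype]
SEDCMD-strip_ms = s/(\d+)ms\)/\1)/g
```

After this, fields like Actual_TT arrive as bare numbers, which also makes them candidates for indexed extraction and tstats.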
The transaction is identified by jsessionid. For an SPL query to find all transactions which lasted less than 5 seconds, should I use:

* | transaction jsessionid maxspan=5

or

* | transaction jsessionid timelimit=5

I find it hard to see the difference between them, or whether we even need maxspan or timelimit, since the requirement is "less than 5 seconds".
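One approach that sidesteps the choice entirely (a sketch): transaction emits a duration field for each assembled transaction, so you can build transactions unconstrained and filter on elapsed time afterwards:

```
* | transaction jsessionid
  | where duration < 5
```

Constraints like maxspan shape how transactions are assembled rather than filtering the finished results, so the post-filter on duration expresses "lasted less than 5 seconds" directly.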
Hi,

index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.1","10.0.0.2")
| bucket _time span=1m
| stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time
| where num_dest_port > 500 OR num_dest_ip > 500
| eval Total=num_dest_port+num_dest_ip
| sort -Total
| dedup src_ip

With lookups:

index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.1","10.0.0.2")
| search NOT [|inputlookup Blocked_IP.csv]
| fields src_ip
| bucket _time span=1m
| stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time
| where num_dest_port > 500 OR num_dest_ip > 500
| eval Total=num_dest_port+num_dest_ip
| sort -Total
| dedup src_ip

I am not able to exclude the results from the lookup, or if I modify the search I get no results at all. Kindly help.
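Two things often bite in this pattern (a sketch; it assumes the lookup's column is literally named src_ip, since the subsearch filter only works when the returned field name matches a field in the events): the subsearch should return only the matching field, and | fields src_ip discards dest_port and dest_ip before stats can count them:

```
index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.1","10.0.0.2")
    NOT [| inputlookup Blocked_IP.csv | fields src_ip]
| bucket _time span=1m
| stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time
| where num_dest_port > 500 OR num_dest_ip > 500
```

If the CSV column has a different name, rename it inside the subsearch (| rename blocked_ip as src_ip) so the generated filter targets the right field.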
Hi All, I am using the query below and it works fine, i.e. it shows how many emails were sent to a distribution list in a month:

sourcetype="ms:o365:reporting:messagetrace" SenderAddress=*** RecipientAddress=*dl1@contoso.com* Status IN (*) subject="***" MessageId=***
| timechart span=1mon count

I have the requirement below; please guide me with the query. How many emails were sent to the DL dl1@contoso.com on a given day, along with each email's subject and sender address; and I want to schedule this report to be sent to user1@contoso.com on a daily basis.
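One way to shape that (a sketch built on your base search; it assumes the subject and SenderAddress field names your sourcetype already extracts) is a daily breakdown by sender and subject:

```
sourcetype="ms:o365:reporting:messagetrace" RecipientAddress=*dl1@contoso.com*
| bin _time span=1d
| stats count by _time SenderAddress subject
```

Saved as a report, this can be scheduled to run once a day with the "Send email" action addressed to user1@contoso.com.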
I need to round max(Delay) and avg(Delay) to 3 decimals in the following command:

my search | timechart span=5m avg(Delay) max(Delay) by host

Thanks
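One approach (a sketch) is to aggregate with bin and stats instead, so the rounding can be applied with a plain eval afterwards:

```
my search
| bin _time span=5m
| stats avg(Delay) as avg_delay max(Delay) as max_delay by _time host
| eval avg_delay=round(avg_delay,3), max_delay=round(max_delay,3)
```

This produces the same 5-minute buckets split by host, with both series rounded to 3 decimals.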
Hi everyone, I use dbxquery and get this result from the database:

id   count
123  12
456  24
478  6

I also have a CSV file already uploaded as a lookup in Splunk, like this:

id   type
123  Machine
478  Machine
456  Food
987  Food
789  Toys

How can I add the column "type" from the lookup to the search result above? Basically, this is what I want to achieve:

id   count  type
123  12     Machine
478  6      Machine
456  24     Food
987  0      Food
789  0      Toys

I tried:

| lookup lookupfile.csv id OUTPUT id type

but it doesn't work.

Thanks, Julia
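Since the desired output also includes lookup rows that have no match in the database result (987 and 789 with count 0), a plain lookup is not enough. One sketch (using the lookup file name as given; the tostring() step is an assumption, since lookup matching is string-based and dbxquery may return id as a number):

```
... your dbxquery search ...
| eval id=tostring(id)
| append [| inputlookup lookupfile.csv | fields id type]
| stats sum(count) as count values(type) as type by id
| fillnull value=0 count
```

Appending the full lookup and merging with stats keeps every id from both sides, while fillnull turns the unmatched ids' missing counts into 0.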
Our firewall logs show up twice in Splunk. I configured the rsyslog server with TCP. When I configure the log server with UDP, everything is okay, but TCP is a problem: when I configure the log server with TCP on port 10514, every event is duplicated.
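A common culprit worth ruling out (an assumption, not a diagnosis): the same events reaching Splunk by two paths at once, e.g. rsyslog writing them to a file that a forwarder monitors while also relaying them over TCP. In rsyslog terms, that is two actions bound to the same selector; the host and port below are placeholders:

```
# /etc/rsyslog.d/splunk.conf (sketch)
# If both actions are active for the same messages, Splunk sees each event twice.
*.* /var/log/firewall.log          # local file monitored by a forwarder
*.* @@splunk-hf.example.com:10514  # TCP relay (@@ = TCP, @ = UDP)
```

Also check that Splunk does not have both a UDP and a TCP input enabled for the same source, and that the firewall is not configured to send to both.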
Hi Team, we want to know the number of available agents, the used and unused agents, and the available licenses. Could you please help us find that information? Thanks & Regards, Srinivas
Hi Team, I have an event in the format below and want to extract the key-value pairs as fields. Please help me extract the fields from LogDate through User. Thanks.

{
  event: INFO 2022-09-23 11:49:59,033 [[MuleRuntime].uber.01: [papi-ust-email-notification-v1-uw-qa].get:\ping:Router.CPU_LITE @6c1fb7] org.mule.runtime.core.internal.processor.LoggerMessageProcessor: {
    "LogDate": "09/23/2022 16:11:13.932",
    "LogNo": "99",
    "LogLevel": "INFO",
    "LogType": "Process Level",
    "LogMessage": "Splunk anypoint log",
    "TimeTaken": "0:00:12.628",
    "ProcessName": "AnypointSplunkTest",
    "TaskName": "AnypointTest",
    "RPAEnvironment": "DEV",
    "LogId": "002308900.20250824210419999",
    "MachineName": "abc-xyz-efg",
    "User": "name.first"
  }
  metaData: { ... }
}

And this is the raw text:

{"metaData":{"sourceApiVersion":"1.0.0-SNAPSHOT","index":"aas","sourceApi":"papi-cust-email-notification-v1-uw-qa","cloudhubEnvironment":"AUTOMATION-QA","tags":""},"event":"INFO 2022-09-23 11:49:59,033 [[MuleRuntime].uber.01: [papi-cust-email-notification-v1-uw2-qa].get:\\ping:Router.CPU_LITE @6f3b7] org.mule.runtime.core.internal.processor.LoggerMessageProcessor: {\n \"LogDate\": \"09/23/2022 16:11:13.932\",\n \"LogNo\": \"99\",\n \"LogLevel\": \"INFO\",\n \"LogType\": \"Process Level\",\n \"LogMessage\": \"Splunk anypoint log\",\n \"TimeTaken\": \"0:00:12.628\",\n \"ProcessName\": \"AnypointSplunkTest\",\n \"TaskName\": \"AnypointTest\",\n \"RPAEnvironment\": \"DEV\",\n \"LogId\": \"002308900.20250824210419999\",\n \"MachineName\": \"abc-xyz-wd\",\n \"User\": \"name.first\"\n}"}
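Since the inner JSON is embedded as a string inside the outer event's "event" field, one search-time sketch (field names taken from the raw text above; the (?s) flag lets the regex span the embedded newlines) is to pull the event string with spath, capture the trailing {...} block with rex, and parse it with a second spath:

```
... your base search ...
| spath input=_raw path=event output=event_text
| rex field=event_text "(?s)LoggerMessageProcessor: (?<inner_json>\{.*\})"
| spath input=inner_json
```

This should yield LogDate, LogNo, LogLevel, and the rest as ordinary search-time fields.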
Hello Team, I am trying to migrate from Classic dashboards to Dashboard Studio and am facing an issue with setting a token. In Classic I set it from a single value panel's search result and show it as an under-label:

<set token="money_avg">$result.money_avg$</set>

and

<option name="underLabel">+/- $money_avg$</option>

In Studio I am unable to find this option on the single value panel (highlighted yellow in the screenshot). Is there any workaround?
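One avenue worth checking (a sketch; the datasource name ds_money is hypothetical): Dashboard Studio can reference search results directly as tokens using the $<datasource name>:result.<field>$ syntax, which may remove the need for a separate <set token> step:

```
"options": {
    "underLabel": "+/- $ds_money:result.money_avg$"
}
```

Whether underLabel is exposed on your Studio visualization may depend on the Splunk version, so verify the option name against your release's Studio single value documentation.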
Hi everyone,

I am attempting to implement some logic in my alert searches, but I can't seem to figure out how to do it. I have event data coming into Splunk that I want to trigger a ServiceNow incident creation, using a priority value based on the event severity and the host environment (test, stage, prod, DR). I am using a case statement to assign a severity ID depending on the alert severity:

| eval severity_id=case(Severity=="critical", 6, Severity=="major", 5, 1==1, 3)

If I want to add a second condition that checks the value of the hostEnvironment field before setting the severity ID, what would be the best way to do this? E.g. if severity = "critical" AND hostEnvironment = test, then severity ID = 3; if severity = "critical" AND hostEnvironment = prod, then severity ID = 6; and so on. I am hoping there is a way to nest the comparison functions.

Thanks in advance.
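The conditions in case() can be full boolean expressions, so the nesting described above is just an AND inside each branch (a sketch; the severity IDs per environment are illustrative, not a recommendation):

```
| eval severity_id=case(
    Severity=="critical" AND hostEnvironment=="prod", 6,
    Severity=="critical" AND hostEnvironment=="test", 3,
    Severity=="major"    AND hostEnvironment=="prod", 5,
    1==1, 3)
```

case() evaluates the pairs in order and returns the value of the first condition that is true, so the 1==1 catch-all at the end supplies the default.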