All Posts


Have you tried using spath?
Thank you for the update @PaulPanther. As advised, I removed the collect command from my search query. Even then, I am not able to get the events in the summary index. This search is scheduled to run every hour, yet the latest events I can see are 10 days old, not 1 hour. From what I can observe, the scheduled report is not ingesting events into the summary index.
I'm new to Splunk and trying to display a table in the below format after reading data from JSON. Could someone help me with the Splunk query?

Transaction Name    pct2ResTime
Transaction 1       4198
Transaction 2       1318
Transaction 3       451

JSON file name: statistics.json

{
  "Transaction1" : {
    "transaction" : "Transaction1",
    "pct1ResTime" : 3083.0,
    "pct2ResTime" : 4198.0,
    "pct3ResTime" : 47139.0
  },
  "Transaction2" : {
    "transaction" : "Transaction2",
    "pct1ResTime" : 1151.3000000000002,
    "pct2ResTime" : 1318.8999999999996,
    "pct3ResTime" : 6866.0
  },
  "Transaction3" : {
    "transaction" : "Transaction3",
    "pct1ResTime" : 342.40000000000003,
    "pct2ResTime" : 451.49999999999983,
    "pct3ResTime" : 712.5799999999997
  }
}
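A minimal sketch of the spath suggestion above, assuming statistics.json is ingested as a single JSON event; the index and sourcetype names here are placeholders, not anything confirmed in the thread:

index=your_index sourcetype=your_json_sourcetype source="statistics.json"
| spath
| table *.pct2ResTime
| transpose
| rex field=column mode=sed "s/\.pct2ResTime$//"
| rename column AS "Transaction Name", "row 1" AS pct2ResTime

The transpose turns the event's Transaction*.pct2ResTime fields into rows, and the rex simply strips the .pct2ResTime suffix so the first column reads like the Transaction Name column in the desired table.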
Alerts are based on searches. Searches do _not_ have to be based on indexes. You could even do a repeated daily search to detect the DST change. But the question is why use Splunk for it in the first place.
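As a rough illustration of such an index-free daily search (a sketch only, assuming the search head's timezone is the one whose DST change matters):

| makeresults
| eval offset_today = strftime(now(), "%z"),
       offset_tomorrow = strftime(relative_time(now(), "+1d"), "%z")
| where offset_today != offset_tomorrow
``` returns a result only when the UTC offset changes within the next 24 hours, i.e. a DST switch is imminent ```

Scheduled daily with an alert condition of "number of results > 0", this would fire on the day before the change.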
Actually, modern Splunk lets you specify data directly in the makeresults command. So you can directly append

| makeresults annotate=f format=csv data="index,count
index1,0
index2,0
...
indexn,0"
| fields - _time
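For context, a sketch of how that snippet could slot into the zero-event check discussed in this thread; the index names are placeholders, and this assumes a Splunk version whose makeresults supports format=csv with newline-separated rows in data:

| tstats count where index IN (index1, index2, index3) BY index
| append
    [| makeresults annotate=f format=csv data="index,count
index1,0
index2,0
index3,0"
     | fields - _time]
| stats sum(count) AS total BY index
| where total = 0
``` indexes whose total stays 0 had no events in the search window ```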
@devsru wrote:

Thanks for the query. I need to send an alert a day before daylight savings in Europe, i.e. Sun, Mar 30, 2025 – Sun, Oct 26, 2025. Could you please tell me how to update this query? Let's say it runs at 2 PM the day before, with the message.

Ok - so am I to assume the rule is the 4th Sunday of those months, or is this more difficult, like the last Sunday of those months? There needs to be a rule or common theme that identifies each year in the future; if a governing body just randomly decides each year, then I can't script for that.

| eval Sunday=strftime(relative_time(strptime(FirstOfMonth, "%Y-%m-%d"),"+2w@w0"), "%Y-%m-%d")
| eval Match=if((Sunday=DayOfYear AND (strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="03" OR strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="11") ),"TRUE","FALSE")

The eval for Sunday=... contains '+2w@w0', which indicates the second week @ weekday 0, which in this case is Sunday (1=Monday, etc.). The eval for Match= has many AND/OR statements, but the '==03' and '==11' just need to be updated to match your months in question.

The entire search I gave you will only identify the two days where DST changes occur. You need to add an additional calculation that checks whether today (now()) is the day before either of the DST change results. If TRUE then result == 1, if FALSE then result == 0 (result being any variable name of your choosing). Once you have that search working and verified, you can set up an alert action that results in email delivery if the result value is > 0. That alert search can be scheduled to run every Saturday of every week. Set it once and forget about it, as it should work year after year. That said, good maintenance is to verify on a recurring basis that the search still matches your local DST rules and that the destination mailing list still exists and contains the appropriate user base.
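A sketch of that additional "day before" calculation, tacked onto the end of the search above. The field names Match and DayOfYear come from that search, and it is assumed here that DayOfYear holds a "%Y-%m-%d" date string; treat this as illustrative only:

| eval result = if(Match=="TRUE" AND strftime(relative_time(now(), "+1d@d"), "%Y-%m-%d") == DayOfYear, 1, 0)
| where result > 0
``` a result is returned only when tomorrow is a DST change day, so an alert on "number of results > 0" fires today ```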
Hi @StephenD1,

You can try the below shorter version of @gcusello's solution:

| tstats count where index IN (_internal, index2, index3, index4, index5) BY index
| append
    [ makeresults
    | eval index="index1,index2,index3,index4,index5"
    | eval index=split(index,",")
    | mvexpand index ]
| stats sum(count) AS total BY index
Hi, I have created a new token under Settings > Access Tokens, and as I understand it I should get a token ID to copy immediately (for use elsewhere). However, after creating and waiting on multiple tokens, I cannot see this token ID anywhere to copy. Could I get some help with where or how to copy this token ID? Thank you!
This should help: https://docs.splunk.com/Documentation/ES/7.3.2/Admin/Useintelinsearch
Hi @wtaddis, what's your issue? Change the grants on the dashboard and the knowledge objects used by it. Ciao. Giuseppe
Hi @StephenD1,

you can use the solution from @richgalloway or (if the indexes to monitor are few) modify your search to:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append [ | makeresults | eval index="index1", count=0 | fields index count ]
| append [ | makeresults | eval index="index2", count=0 | fields index count ]
| append [ | makeresults | eval index="index3", count=0 | fields index count ]
| append [ | makeresults | eval index="index4", count=0 | fields index count ]
| append [ | makeresults | eval index="index5", count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

or create a lookup (called e.g. perimeter.csv) containing the list of indexes to monitor and run:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

Ciao.
Giuseppe
I, too, am having this problem.  We are working from this document: https://splunk.github.io/splunk-connect-for-syslog/2.30.1/troubleshooting/troubleshoot_resources/
The Splunk-wrapped syslog-ng service, "Splunk Connect for Syslog" (AKA SC4S), comes standard with a systemd unit file that reaches out to GitHub on every startup to obtain the latest container image. This had worked flawlessly since we first set up syslog inputs for the client. Years later, however, somebody made a WAF change that blocked connectivity to GitHub, including the download URL found in our unit file (specifically, ghcr.io/splunk/splunk-connect-for-syslog/container3:latest), and did not properly warn or socialize this fact before doing so. This caused the sc4s service to be unable to restart, because the systemd unit file downloads a fresh image every time before it starts, which it could no longer do.

WARNING: if you set up SC4S the normal way, then you did so as user "root", so you will need to do all of this as user "root" as well.

The most immediate solution is to see if there is still an older image around to run, using this command:

docker image ls

You should see something like this:

REPOSITORY                                              TAG      IMAGE ID       CREATED    SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container2:2   latest   SomeImageID2   SomeDate   SomeSizeGB

If there is, you can modify the unit file by copying the "IMAGE ID" value (in this case "SomeImageID2") and changing this line:

Environment="SC4S_IMAGE=https://ghcr.io/splunk/splunk-connect-for-syslog/container2:2:latest"

To this:

Environment="SC4S_IMAGE=SomeImageID2"

And also commenting out this line, like this:

#ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE

Then you need to reload systemd:

systemctl daemon-reload

This should allow you to start your service immediately as normal:

service sc4s start

Now you have the problem of how to get the latest image manually (since the automatic download can no longer work), which according to this link: https://splunk.github.io/splunk-connect-for-syslog/main/upgrade/ is now this:

ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

The following link gave us all of what we needed, but we had to do it a few times with various options mined from the comments to get it exactly right: https://stackoverflow.com/questions/37905763/how-do-i-download-docker-images-without-using-the-pull-command

You will first have to install Docker someplace that CAN get to the image URL. If you can run a browser there, just paste the value into your browser and it should redirect to an actual page. If you only have the CLI there, use curl to test, like this:

curl ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

In our case, we just installed Docker on a Windows laptop and then opened PowerShell to run these 2 commands:

docker pull ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
docker image ls

You should see something like this:

REPOSITORY                                            TAG      IMAGE ID       CREATED    SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container3   latest   SomeImageID3   SomeDate   SomeSizeGB

Next you need to export the image to a file, like this:

docker save SomeImageID3 --output DockerImageSC4S.tar

Then transfer this to "/tmp" on your SC4S server host however you please and load it like this:

docker load -i /tmp/DockerImageSC4S.tar

Then, of course, you need to re-modify the unit file using the new "SomeImageID3" value instead of "SomeImageID2".
Have you looked at this answer to see if it fulfills your needs? https://community.splunk.com/t5/Alerting/Throttle-alerts-based-on-field-value/m-p/172536
If you want results by URI, then don't put anything else after BY. But then the results will not be the same as with your current query.

.... | stats values(status) as status .... by URI

Just replace the .... before BY with the other fields you want to see. But probably this is not what you are looking for? Can you tell us what you need to know, not how you are trying to do it?
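As an illustrative completion of that idea, using the base search from the question further down in this thread, and assuming (purely as a guess) that API_Staus takes the values "success" and "failure":

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| stats values(lob) AS lob,
        count(eval(API_Staus=="success")) AS success_count,
        count(eval(API_Staus=="failure")) AS failure_count,
        count AS total
        BY URI

This produces one row per URI, with the success and failure counts as separate columns instead of separate rows.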
Thanks for the response. There is a field called URI, and some URI values are coming up as duplicates. How can I adjust the query so that duplicate URIs won't appear?
1. You're using a very old Splunk version, judging by the screenshot.

2. Your initial search is very, very ineffective (Splunk cannot use its index of terms; it has to look through every single event to find the ones you're after) - see the sketch after this post.

3. What do you mean by "filter duplicate URL"? You're counting by a triplet - url, status and lob, whatever that is. So you'll get a separate result for each combination of those three values. If you don't want to break it down by status, don't include that field in the BY clause.
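On point 2, a hedged sketch of a more index-friendly base search. The index and sourcetype names are placeholders; the idea is simply to give Splunk a literal keyword it can look up in its term index before the wildcarded URI filter is applied:

index=your_index sourcetype=your_sourcetype prescriptions
| search URI="*/prescriptions/eni/api/api-cw/*" NOT URI="*/prescriptions/eni/api/api-cw/legacySession/cache*"
| stats count BY lob, URI, API_Staus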
Trying to get success and failure status counts using the below query, but it's not filtering out the duplicate URLs. Can someone help me with this? I want the result in fewer rows, but lob, URI, API_Status and their counts should show.

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| stats count by lob, URI, API_Staus

The result is coming out as in the attached screenshot.
The sourcetype names are xml and raw_text. That's why I mentioned that you need to adjust, as I did not know your sourcetype names.

For this particular purpose, you can simply tweak the sourcetype name to represent CopyLocation and TargetLocation - or any other name you want to use in foreach.

| eval sourcetype = if(sourcetype == "raw_text", "CopyLocation", "TargetLocation")

(Without mucking with the sourcetype value, you can also use "foreach raw_text xml" and get the correct results, then rename the fields.)

Here is a complete emulation:

| makeresults
| eval sourcetype = "raw_text",
       data = mvappend("2024-12-18 17:02:50, file_name=\"XYZ.csv\", file copy success",
                       "2024-12-18 17:02:58, file_name=\"ABC.zip\", file copy success",
                       "2024-12-18 17:03:38, file_name=\"123.docx\", file copy success",
                       "2024-12-18 18:06:19, file_name=\"143.docx\", file copy success")
| mvexpand data
| eval _time = strptime(replace(data, ",.+", ""), "%F %T")
| rename data AS _raw
| extract
| append
    [ makeresults
    | eval sourcetype = "xml",
           _raw = "2024-12-18 17:30:10 <FileTransfer status=\"success\"> <FileName>XYZ.csv</FileName> <FileName>ABC.zip</FileName> <FileName>123.docx</FileName> </FileTransfer>"
    | eval _time = strptime(replace(_raw, "<.+", ""), "%F %T") ]
``` the above emulates sourcetype IN (CopyLocation, TargetLocation) ```
| eval sourcetype = if(sourcetype == "raw_text", "CopyLocation", "TargetLocation")
| eval target_log = replace(_raw, "^[^<]+", "")
| spath input=target_log
| mvexpand FileTransfer.FileName
| eval FileName = coalesce(file_name, 'FileTransfer.FileName')
| chart values(_time) over FileName by sourcetype
| sort CopyLocation
| foreach *Location
    [ eval <<FIELD>> = strftime(<<FIELD>>, "%F %T") ]
| fillnull value=Pending TargetLocation
Let me first try to understand the problem: You want to find servers whose end state is offline, but whose immediately previous reported state is not offline, i.e., those whose state newly becomes offline. Is this correct? In other words, given these mock events

_time               host   state_desc
2024-12-20 18:00    host1  not online
2024-12-20 16:00    host2  not online
2024-12-20 14:00    host3  ONLINE
2024-12-20 12:00    host4  not online
2024-12-20 10:00    host0  not online
2024-12-20 08:00    host1  ONLINE
2024-12-20 06:00    host2  not online
2024-12-20 04:00    host3  not online
2024-12-20 02:00    host4  ONLINE
2024-12-20 00:00    host0  not online
2024-12-19 22:00    host1  not online
2024-12-19 20:00    host2  ONLINE
2024-12-19 18:00    host3  not online
2024-12-19 16:00    host4  not online
2024-12-19 14:00    host0  ONLINE
2024-12-19 12:00    host1  not online
2024-12-19 10:00    host2  not online
2024-12-19 08:00    host3  ONLINE
2024-12-19 06:00    host4  not online
2024-12-19 04:00    host0  not online
2024-12-19 02:00    host1  ONLINE
2024-12-19 00:00    host2  not online
2024-12-18 22:00    host3  not online
2024-12-18 20:00    host4  ONLINE
2024-12-18 18:00    host0  not online
2024-12-18 16:00    host1  not online
2024-12-18 14:00    host2  ONLINE
2024-12-18 12:00    host3  not online
2024-12-18 10:00    host4  not online
2024-12-18 08:00    host0  ONLINE
2024-12-18 06:00    host1  not online
2024-12-18 04:00    host2  not online
2024-12-18 02:00    host3  ONLINE
2024-12-18 00:00    host4  not online
2024-12-17 22:00    host0  not online

you want to alert on host1 and host4 only.

To do this with streamstats, you would need to sort events this way and that. I usually consider that a cost. (And I am quite fuzzy in streamstats:-) So, I consider this one of the few good uses of transaction. Something like

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| transaction host endswith="state_desc=ONLINE" keepevicted=true
| search eventcount = 1 state_desc != ONLINE

Here is an emulation of the mock data for you to play with and compare with real data.

| makeresults count=35
| streamstats count as state_desc
| eval _time = relative_time(_time - state_desc * 7200, "-0h@h")
| eval host = "host" . state_desc % 5, state_desc = if(state_desc % 3 > 0, "not online", "ONLINE")
``` the above emulates index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db ```

Output from the search is

_time               closed_txn  duration  eventcount  field_match_sum  host   linecount  state_desc
2024-12-20 18:00    0           0         1           1                host1  1          not online
2024-12-20 12:00    0           0         1           1                host4  1          not online

The rest of your search is simply manipulation of the display string.