All Posts

Could you log in as the Splunk user on your indexer and then run btool for the stanzas relating to the TLS-secured forwarding?

/opt/splunk/bin/splunk btool inputs list SSL
/opt/splunk/bin/splunk btool inputs list splunktcp-ssl
/opt/splunk/bin/splunk btool server list sslConfig

Make sure that the settings match the instructions in the article. If any values are wrong, add --debug to the btool commands to find the file that is setting them. If there are no problems there, do you find specific complaints in the splunkd log of the forwarder, e.g. "Invalid certificate", or does the connection time out? Have you been able to forward logs, even _internal logs, before setting up TLS?
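For reference, this is roughly the shape of output btool should report if the indexer is set up for TLS receiving. It is only a minimal sketch with illustrative values; the port, certificate paths, and requireClientCert setting will differ in your environment:

# inputs.conf on the indexer (illustrative values only)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <certificate password>
requireClientCert = false

# server.conf on the indexer (illustrative value only)
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACert.pem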
When you press the "New Token" button, do you see the "New Token" dialog containing the fields User, Audience, Expiration, Not Before, and then a "Token" field with the warning that the token will appear there only once upon creation? When you press "Create", the "Token" field will contain the token itself, not the token ID. On the Tokens page you will see the tokens listed, and the first column will be their token ID.
It is possible to use props.conf settings on your indexer machines to pre-process the JSON into distinct events for each transaction, but I will assume that you instead have that one JSON object as a single event in Splunk. You can then use the following search:

<Your search for finding the json event>
``` Chop off the first and last braces ```
| rex field=_raw mode=sed "s/^{//"
| rex field=_raw mode=sed "s/}$//"
``` Add a SPLITHERE keyword to target with a makemv command ```
| rex field=_raw mode=sed "s/},/},SPLITHERE/g"
``` Remove the "Transaction1" etc. labels for each sub-object ```
| rex field=_raw mode=sed "s/\s*\"Transaction\d*\"\s:\s//g"
``` To avoid making _raw a multivalue, eval it to the "a" field ```
| eval a = _raw
``` Split 'a' into multiple values and table it ```
| makemv delim=",SPLITHERE" a
| mvexpand a
| table a
``` Extract the key values for each json object ```
| spath input=a
``` Filter to the desired fields and make the final table with renaming and rounding ```
| table transaction pct2ResTime
| rename transaction as "Transaction Name"
| eval pct2ResTime = round(pct2ResTime)
Have you tried using spath?
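As an illustration only, a minimal sketch of what spath could give you for that event. The base search is a placeholder, and the transpose/round steps are assumptions about the desired table layout, not a definitive recipe:

<your search for the statistics.json event>
| spath
| table *.pct2ResTime
| transpose
| rename column AS transaction, "row 1" AS pct2ResTime
| eval transaction=replace(transaction, "\.pct2ResTime$", ""), pct2ResTime=round(pct2ResTime)
| rename transaction AS "Transaction Name"

spath extracts the nested keys as dotted field names (e.g. Transaction1.pct2ResTime); transpose then turns those columns into one row per transaction.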
Thank you for the update @PaulPanther . As advised, I removed the collect command from my search query. Even then, I am not able to get the events into the summary index. The search is scheduled to run every hour, yet the latest events I can see are from 10 days ago, not 1 hour ago. From what I can observe, the scheduled report is not ingesting events into the summary index.
I'm new to Splunk and trying to display a table in the format below after reading data from JSON. Could someone help me with the Splunk query?

Transaction Name    pct2ResTime
Transaction 1       4198
Transaction 2       1318
Transaction 3       451

JSON file name: statistics.json

{
  "Transaction1" : {
    "transaction" : "Transaction1",
    "pct1ResTime" : 3083.0,
    "pct2ResTime" : 4198.0,
    "pct3ResTime" : 47139.0
  },
  "Transaction2" : {
    "transaction" : "Transaction2",
    "pct1ResTime" : 1151.3000000000002,
    "pct2ResTime" : 1318.8999999999996,
    "pct3ResTime" : 6866.0
  },
  "Transaction3" : {
    "transaction" : "Transaction3",
    "pct1ResTime" : 342.40000000000003,
    "pct2ResTime" : 451.49999999999983,
    "pct3ResTime" : 712.5799999999997
  }
}
Alerts are based on searches. Searches do _not_ have to be based on indexes. You could even do a repeated daily search to detect the DST change. But the question is why use Splunk for this in the first place.
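For example, a minimal sketch of an index-free daily search that returns a result when the local UTC offset changes within the next day. The only assumption is that the search head's timezone follows the DST rule you care about:

| makeresults
| eval today_offset=strftime(now(), "%z"), tomorrow_offset=strftime(relative_time(now(), "+1d"), "%z")
| where today_offset != tomorrow_offset

Scheduled once a day with an alert condition of "number of results > 0", it would fire on the day before the clocks change.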
Actually modern Splunk lets you specify data directly in the makeresults command. So you can directly append:

| makeresults annotate=f format=csv data="index,count
index1,0
index2,0
...
indexn,0"
| fields - _time
@devsru wrote:
Thanks for the query. I need to send an alert a day before daylight savings in Europe, i.e. Sun, Mar 30, 2025 – Sun, Oct 26, 2025. Could you please tell me how to update this query? Let's say run at 2 PM the day before with the message.

Ok - so am I to assume the rule is the 4th Sunday of those months, or is it more difficult, like the last Sunday of those months? There needs to be a rule or common theme to identify each year in the future; if a governing body just randomly decides each year then I can't script for that.

| eval Sunday=strftime(relative_time(strptime(FirstOfMonth, "%Y-%m-%d"),"+2w@w0"), "%Y-%m-%d")
| eval Match=if((Sunday=DayOfYear AND (strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="03" OR strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="11") ),"TRUE","FALSE")

The eval for Sunday=... contains '+2w@w0', which means two weeks forward, snapped to weekday 0, which in this case is Sunday (1=Monday, etc.). The eval for Match= has many AND/OR statements, but the '==03' and '==11' just need to be updated to match the months in question.

The entire search I gave you will only identify the two days where DST changes occur. You need to add an additional calculation that asks whether today, i.e. now(), is the day before either of the DST change results. If TRUE then result == 1, if FALSE then result == 0 (result being any variable name of your choosing). Once you have that search working and verified, you can set up an alert action that results in email delivery if the result value is > 0. That alert search can be scheduled to run every Saturday of every week. Set it once and forget about it, as it should work year after year. That said, good maintenance is to verify on a recurring basis that the search still matches your local DST rules and that the destination mailing list still exists and contains the appropriate user base.
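As an illustration only, that additional calculation could look something like the lines below, appended to the search above. This sketch assumes the DayOfYear field is formatted as %Y-%m-%d like Sunday; the Tomorrow and result field names are arbitrary:

| eval Tomorrow=strftime(relative_time(now(), "+1d@d"), "%Y-%m-%d")
| eval result=if(Match="TRUE" AND DayOfYear=Tomorrow, 1, 0)
| where result > 0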
Hi @StephenD1, You can try the shorter version of @gcusello 's solution below:

| tstats count where index IN (_internal, index2, index3, index4, index5) BY index
| append [ | makeresults
    | eval index="index1,index2,index3,index4,index5"
    | eval index=split(index,",")
    | mvexpand index ]
| stats sum(count) AS total BY index
Hi, I have created a new token under Settings > Access Tokens, and by right I should be getting a token ID to copy immediately (for use elsewhere). However, after creating and waiting on multiple tokens, I cannot see this token ID to copy anywhere. Could I get some help with knowing where or how to copy this token ID? Thank you!
This should help: https://docs.splunk.com/Documentation/ES/7.3.2/Admin/Useintelinsearch
Hi @wtaddis , what's your issue? Change the permissions on the dashboard and the knowledge objects used by it. Ciao. Giuseppe
Hi @StephenD1 , you can use the solution from @richgalloway or (if the indexes to monitor are few) modify your search into:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append [ | makeresults | eval index="index1", count=0 | fields index count ]
| append [ | makeresults | eval index="index2", count=0 | fields index count ]
| append [ | makeresults | eval index="index3", count=0 | fields index count ]
| append [ | makeresults | eval index="index4", count=0 | fields index count ]
| append [ | makeresults | eval index="index5", count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

or create a lookup (called e.g. perimeter.csv) containing the list of indexes to monitor and run:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

Ciao.
Giuseppe
I, too, am having this problem.  We are working from this document: https://splunk.github.io/splunk-connect-for-syslog/2.30.1/troubleshooting/troubleshoot_resources/
The splunk(-company)-wrapped syslog-ng service, "Splunk Connect for Syslog" (AKA SC4S), comes standard with a systemd unit file that reaches out to GitHub on every startup to obtain the latest container image. This had worked flawlessly since we first set up syslog inputs for the client. However, years later, somebody made a WAF change that blocked connectivity to GitHub, which included the download URL found in the unit file (specifically, ghcr.io/splunk/splunk-connect-for-syslog/container3:latest), and did not properly warn or socialize this fact before doing so. This caused the sc4s service to be unable to restart, because the systemd unit file downloads a fresh image every time before it starts, which it could no longer do.

WARNING: if you set up SC4S the normal way, then you did so as user "root", so you will need to do all of this as user "root" also.

The most immediate solution is to see if there is still an older image around to run, using this command:

docker image ls

You should see something like this:

REPOSITORY                                                TAG      IMAGE ID       CREATED    SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container2:2     latest   SomeImageID2   SomeDate   SomeSizeGB

If there is, you can modify the unit file by copying the "IMAGE ID" value (in this case "SomeImageID2") and changing this line:

Environment="SC4S_IMAGE=https://ghcr.io/splunk/splunk-connect-for-syslog/container2:2:latest"

To this:

Environment="SC4S_IMAGE=SomeImageID2"

And also commenting out this line, like this:

#ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE

Then you need to reload systemd like this:

systemctl daemon-reload

This should allow you to start your service immediately as normal:

service sc4s start

Now you have the problem of how to get the latest image manually (now that the automatic download cannot work), which according to this link: https://splunk.github.io/splunk-connect-for-syslog/main/upgrade/ is now this:

ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

The following link gave us all of what we needed, but we had to try it a few times with various options mined from the comments to get it exactly right: https://stackoverflow.com/questions/37905763/how-do-i-download-docker-images-without-using-the-pull-command

You will first have to install docker someplace that CAN get to the image URL. If you can run a browser there, just paste the value into your browser and it should redirect to an actual page. If you only have the CLI there, just use curl to test, like this:

curl ghcr.io/splunk/splunk-connect-for-syslog/container3:latest

In our case, we just installed docker on a Windows laptop and then opened PowerShell to run these 2 commands:

docker pull ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
docker image ls

You should see something like this:

REPOSITORY                                              TAG      IMAGE ID       CREATED    SIZE
ghcr.io/splunk/splunk-connect-for-syslog/container3     latest   SomeImageID3   SomeDate   SomeSizeGB

Next you need to export the image to a file, like this:

docker save SomeImageID3 --output DockerImageSC4S.tar

Then transfer this to "/tmp" on your SC4S server host however you please, and load it like this:

docker load -i /tmp/DockerImageSC4S.tar

Then, of course, you need to re-modify the unit file using the new "SomeImageID3" value instead of "SomeImageID2".
Have you looked at this answer https://community.splunk.com/t5/Alerting/Throttle-alerts-based-on-field-value/m-p/172536 to see if it fulfills your needs?
If you want results by URI then don't put anything else after by. But then the results are not the same as with your current query.

.... | stats values(status) as status .... by URI

Just replace the .... before by with the other fields you want to see. But probably this is not what you are looking for? Can you tell us what you need to know, not how you are trying to do it?
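For example, a minimal sketch of that pattern; the index, sourcetype, and the count aggregation are placeholders, not your actual values:

index=your_index sourcetype=your_sourcetype
| stats values(status) as status count as hits by URI

Because stats groups only by URI here, each URI appears exactly once in the output, so duplicates cannot occur.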
Thanks for the response. There is a field called URI, and some URI values are coming through as duplicates. How can I adjust the query so that duplicate URIs won't appear?
1. You're using a very old Splunk version, judging by the screenshot.
2. Your initial search is very, very inefficient (Splunk cannot use its index of terms, it has to look through every single event to find the ones you're after).
3. What do you mean by "filter duplicate URL"? You're counting by a triplet - url, status and lob, whatever that is. So you'll get a separate result for each combination of those three values. If you don't want to break it down by status, don't include that field in the BY clause.
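As a sketch only (field names taken from your description; the aggregations are placeholders for whatever you are already computing):

... your base search ...
| stats count values(status) as status by url lob

With status moved out of the BY clause and into values(), each url/lob pair appears once, with all of its status values listed in a single row.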