All Posts


Hi @SN1

This largely depends on the implementation of your dashboard - please could you share your existing dashboard code so that we can try to make this work for you?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
I have a dashboard with 4 panels and a checkbox input with two options, "solved" and "unsolved". For "unsolved", the colour of the panels should remain red when the count is greater than 0, which I am able to do with Splunk's built-in dashboard colouring. But for the "solved" option, every panel should be green. How should I approach this?
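One possible approach (a minimal Simple XML sketch, not a definitive implementation - the token names and hex colours are hypothetical, and since a checkbox allows multiple selections a radio input may be a better fit for two mutually exclusive options) is to have the input set colour tokens that every panel references:

<input type="radio" token="status">
  <label>Status</label>
  <choice value="unsolved">Unsolved</choice>
  <choice value="solved">Solved</choice>
  <change>
    <!-- drive the panel colours from the selection -->
    <condition value="unsolved">
      <set token="color_zero">#53A051</set>
      <set token="color_nonzero">#DC4E41</set>
    </condition>
    <condition value="solved">
      <set token="color_zero">#53A051</set>
      <set token="color_nonzero">#53A051</set>
    </condition>
  </change>
</input>

<single>
  <search>...</search>
  <!-- a count of 0 gets the first colour, anything above 0 gets the second -->
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["$color_zero$","$color_nonzero$"]</option>
  <option name="useColors">1</option>
</single>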
@livehybrid Thanks very much for your prompt response on this.

The API you shared for getting the roles assigned to an individual user is not working, and I am also unable to find this API in the documentation. There are only APIs for retrieving the complete set of roles, not the roles for a specific user:

GET /controller/api/rbac/v1/users/{userId}/roles - to retrieve the roles associated with a specific user

Could you check and let me know?
I have a rather weird (and dangerous) custom app that changes the UF instance GUID. Basically, I created a .sh file, which uses the "sed" command on Linux to change the GUID value in the /opt/splunkforwarder/etc/instance.cfg file. Using a .sh script to make changes inside the SPLUNK_HOME directory is quite a dangerous task and I would advise against it; however, this task is quite simple and I tested it, so I decided to deploy an app called REGEN_GUID with a single inputs.conf file containing the stanza to run the script:

[script://./bin/regenerate_guid.sh]
interval = 900
source = regenerate_guid
sourcetype = regenerate_guid
index = <REDACTED>
disabled = 0

In general it is quite simple, and it ran. I could change the instance GUID and nothing critical happened. However, once I saw that the GUID had been changed, I wanted to remove the client from the app. I used the deployment server UI, went into the app section, and removed the IP of the instance from the whitelist. Checking splunkd.log, I could see the log line saying it was removing the app. However, when I checked again afterwards, the UF was still trying to run the script; the log entry appears every 15 minutes, which matches the script interval, so the UF is apparently still scheduling the scripted input. The log looks like this:

05-07-2025 11:00:07.938 +0700 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh" /bin/sh: 1: /opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh: not found

Does anyone know the reason? I suspect Splunk schedules scripted inputs through some kind of internal scheduler state, and my app failed to update that when it was removed?
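For reference, a minimal sketch of what such a regeneration script might look like - the actual script was not posted, so everything here is an assumption apart from the instance.cfg path mentioned above:

#!/bin/sh
# Hypothetical sketch - the original regenerate_guid.sh was not shared.
# instance.cfg stores the GUID under its [general] stanza as: guid = <value>
CFG=/opt/splunkforwarder/etc/instance.cfg
NEW_GUID=$(cat /proc/sys/kernel/random/uuid)
# Replace the existing guid line in place; splunkd only picks up the
# new GUID after a restart.
sed -i "s/^guid = .*/guid = ${NEW_GUID}/" "$CFG"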
You may want to check this out too.   https://community.splunk.com/t5/Getting-Data-In/HEC-timestamp-recognition/m-p/537762
@KeithH Yes, I agree. As @livehybrid mentioned, a warning or some form of explanation would be appropriate in this situation.
@Harikiranjammul   
Thanks. This will probably help.
Is this the sort of thing you are looking for?

| tstats count by index source _time span=2h
| stats list(count) as counts dc(_time) as frequency list(_time) as times by index source
| where frequency>=12

(A 24-hour window contains 12 two-hour buckets, so frequency>=12 keeps only the index/source pairs that have data in every bucket.)
Thanks Kiran - do you agree it should work this way?
@KeithH       
I am running a tstats command with a span of 2 hours by index and source. It returns the data for every 2 hours, but I want to include results only if data is available for every 2-hour bucket in the last 24 hours of the search. Basically, I want to ignore any index/source combination that does not have continuous data. How can I do this?
How long is your search taking? You are searching a 61-minute window in your outer search and a 5-hour window in your append. Is the search in your dashboard part of a base search? How long does each of the individual searches take, and if you put both of them into a dashboard as separate searches, do they give the correct result counts compared with running them directly as searches?
Do NOT use eval to set any value to "StandardizedAddressService bla bla bla". That is data which, as I understand it, is coming from your event - when you write that eval statement you are setting a field called msgTxt (in your example) to the literal value you give it. What is the name of the field in your index=... search that contains that phrase?

You should extract the "event" field using a rex statement that specifies the field you want to extract from, e.g.

| rex field=msgTxt blablabla

and then just use the original SPL I posted at the beginning

| spath input=event
| rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages
| table Latitude Longitude WarningMessages

after that. If you get something unexpected, add some more fields to the table statement at the end to show what those fields contain.
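If it helps, here is a self-contained mock you can paste into a search to see the spath/rename steps working - the JSON shape and the values are assumptions based on the field names above:

| makeresults
``` hypothetical event payload matching the field names used above ```
| eval event="{\"AddressDetails\":[{\"Latitude\":\"51.5074\",\"Longitude\":\"-0.1278\"}],\"WarningMessages\":[\"sample warning\"]}"
| spath input=event
| rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages
| table Latitude Longitude WarningMessages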
Hello all, I am running one query in a dashboard panel and exactly the same query from search, but I am getting different counts.

```query for apigateway call```
index=aws_np earliest=1746540480 latest=1746544140 Method response body : sourcetype="aws:apigateway"
| rex field=_raw "Method response body : (?<json>[^$]+)"
| spath input=json path="header.messageID" output=messageID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200
| rename _time as request_time
```dedup is added to remove duplicates```
| dedup messageID
| append [ search index="aws_np" earliest=1746540480 latest=1746558480
  | rex field=_raw "messageID\": String\(\"(?<messageID>[^\"]+)"
  | rex field=_raw "source\": String\(\"(?<source>[^\"]+)"
  | rex field=_raw "type\": String\(\"(?<type>[^\"]+)"
  | rex field=_raw "detail-type\": String\(\"(?<detail_type>[^\"]+)"
  | where source="XXX" and type="XXXXX" and detail_type="XXXX"
  | stats distinct_count(messageID) as cnt_guid by messageID, _time
  ```by _time is added because we have duplicate records with the same time and guid```
  | stats count(cnt_guid) as published_count by messageID
  | dedup messageID
  | fields messageID, published_count ]
| stats values(action) as request_type sum(published_count) as published_count2 by messageID
| where isnotnull(request_type)
| eventstats sum(published_count2) by request_type
| dedup request_type
| search request_type="Create" OR request_type="Update"
| head 2
| fields sum(published_count2) request_type

So I ran the query from the dashboard panel and then used the "Run Search" option to run it directly, but I am getting different counts. The search gives the correct result; the dashboard is giving fewer.
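One way to confirm that both runs are actually searching the same window (a common first check when a dashboard panel and "Run Search" disagree) is to append | addinfo, which attaches the effective search time bounds to each result - a diagnostic sketch:

| addinfo
``` info_min_time and info_max_time are the effective search window as epoch times ```
| table info_min_time info_max_time

Also note that append subsearches are subject to subsearch limits (result caps and timeouts per limits.conf), which can silently truncate results in one context but not the other.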
Thanks - it will be interesting to see what others think.
Hi @KeithH

So - I have tested as you suggested and found the same results as you.

In terms of it being a bug, I'm leaning towards agreeing with you on that front - it's certainly unexpected behaviour from the user's point of view. However, I'm wondering if it's somewhat intentional and possibly not actually affecting things under the hood.

The props.conf docs on this are a little hazy if you ask me, but basically: if you are using LINE_BREAKER then SHOULD_LINEMERGE should be set to false, *and* you should include a regex with a capture group, which is used to determine the end of the first event and the start of the second. The docs then say that when SHOULD_LINEMERGE is true, you should set one or more additional fields, one of which is BREAK_ONLY_BEFORE.

Obviously in your example you have SHOULD_LINEMERGE=true and a BREAK_ONLY_BEFORE, but it looks like Splunk is converting this to SHOULD_LINEMERGE=false with a LINE_BREAKER equal to the BREAK_ONLY_BEFORE value. Since SHOULD_LINEMERGE=false at that point, the BREAK_ONLY_BEFORE is presumably ignored?

At this point the LINE_BREAKER is "AAAA", as per your example, yet that doesn't meet the spec documentation, which states it should have a capture group in the regex! So even if this conversion is intentional, it doesn't follow the docs, and it also seems to make some of these settings meaningless if you can just specify everything in LINE_BREAKER after all!?

*Something* isn't right - Support might be claiming that it isn't a bug because it works as intended, but I think that would mean the docs are incorrect. If nothing else, the values rendered in the WebUI shouldn't change from the contents of the actual conf files without user interaction. I think some sort of warning/explanation is needed if that is the case!

Anyway, let's see if others also have opinions on this, and good luck with the bug/support case!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
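For comparison, this is a sketch of how the spec says the same break rule should be expressed explicitly in props.conf (assuming events should split immediately before each literal AAAA - the capture group marks the text that is discarded between events):

[test]
# Per props.conf.spec: disable line merging and put the break point in
# LINE_BREAKER, with a capture group around the discarded delimiter text.
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)AAAA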
Hi All, Help please. Can I get people to agree with me that the following is a bug/design flaw - as my Splunk case is getting nowhere.

Please try this, it only takes a moment, promise...

1. In the Splunk GUI go to Source types
2. Click New Source Type
3. Give it a name - maybe "test"
4. Click Advanced
5. Delete the LINE_BREAKER setting
6. Add New Setting: BREAK_ONLY_BEFORE and set its value to AAAA
7. Check/set SHOULD_LINEMERGE to true
8. Save
9. Run this search to confirm your settings look good:

| rest /servicesNS/-/-/configs/conf-props
| where title = "test"
| fields BREAK_ONLY_BEFORE LINE_BREAKER SHOULD_LINEMERGE eai:acl.removable eai:acl.sharing eai:appName title

10. In the search results, confirm the values have been saved as expected
11. Re-edit the source type in the GUI
12. Click Advanced
13. Notice that SHOULD_LINEMERGE has been changed to false and LINE_BREAKER has returned, set to AAAA

So the GUI is changing settings when a user re-edits the sourcetype. Perhaps the user just wanted to change the sourcetype description; if they saved that, the sourcetype would no longer work. I reckon this is a bug or design flaw, but Splunk Support are trying to say it is expected behaviour. Please feel free to agree with Splunk Support if you think I am missing something.

Thanks, Keith
Hi @mchoudhary

Ultimately this depends on what has been extracted and whether the data is CIM compliant. You can see all of the fields for the Network_Traffic data model at https://docs.splunk.com/Documentation/CIM/6.0.4/User/NetworkTraffic

Whilst there isn't a specific "error message" field, I think from memory you might find some use in the "rule" field, which combined with the blocked action should give you some insight into the block reason. Try something like this:

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked" BY All_Traffic.rule sourcetype

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Crabbok

Check out the following - I think this should solve your use case. Note that I have avoided transaction because it's a pretty terrible command to use - I'd only recommend it if you absolutely have to! Instead we can leverage stats.

I've applied some further logic here to split events into individual sessions, so that if a user joins and leaves multiple times in the search window it will show multiple sessions - see the example below.

The full SPL for this for you to try is:

| makeresults format=csv data="Message,UserXXID,_time
Info: User USER001 has joined the event session.,USER001,2025-05-06 22:20:03
Info: User USER002 has joined the event session.,USER002,2025-05-06 22:21:43
Info: User USER001 has left the event session.,USER001,2025-05-06 22:36:43
Info: User USER003 has joined the event session.,USER003,2025-05-06 22:40:03
Info: User USER002 has left the event session.,USER002,2025-05-06 22:53:23
Info: User USER003 has left the event session.,USER003,2025-05-06 23:01:43
Info: User USER001 has joined the event session.,USER001,2025-05-06 23:18:23"
| rex field=Message "has (?<action>[a-zA-Z]+) the event session"
| eval {action}_time=_time
| sort UserXXID
| streamstats count as userEventNum min(joined_time) as session_joined_time, max(left_time) as session_left_time by UserXXID reset_after="action=\"left\""
| eval action_time=strptime(_time, "%Y-%m-%d %H:%M:%S")
| stats range(action_time) as sessionDurationSeconds, values(action) as actions, max(_time) as session_left_time by UserXXID session_joined_time

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.