@Harikiranjammul   
Thanks. Perhaps this will help.
Is this the sort of thing you are looking for?

| tstats count by index source _time span=2h
| stats list(count) as counts dc(_time) as frequency list(_time) as times by index source
| where frequency>=12
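For reference, here is a hedged sketch (in Python, not SPL) of the filtering logic the search above implements: a given (index, source) pair survives only if it has events in every 2-hour bucket of the 24-hour window, i.e. at least 12 distinct buckets, matching `where frequency>=12`. The data and helper name are illustrative, not from any Splunk API.

```python
# Illustration of the "keep only continuous series" idea: an (index, source)
# pair is kept only if it appears in all 12 two-hour buckets of a 24h window.
from collections import defaultdict

WINDOW_HOURS = 24
SPAN_HOURS = 2
REQUIRED_BUCKETS = WINDOW_HOURS // SPAN_HOURS  # 12, mirroring `frequency>=12`

def continuous_series(events):
    """events: iterable of (index, source, epoch_seconds) tuples."""
    buckets = defaultdict(set)
    for idx, src, ts in events:
        buckets[(idx, src)].add(ts // (SPAN_HOURS * 3600))  # 2h bucket id
    return {key for key, b in buckets.items() if len(b) >= REQUIRED_BUCKETS}

# A series with events in all 12 buckets is kept; one with a gap is dropped.
full = [("main", "app.log", b * 7200 + 10) for b in range(12)]
gappy = [("main", "other.log", b * 7200 + 10) for b in range(12) if b != 5]
print(continuous_series(full + gappy))  # → {('main', 'app.log')}
```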
Thanks Kiran - do you agree it should work this way?
@KeithH       
I am running a tstats command with a span of 2hrs by index and source. It returns the data for every 2hrs, but I want to include the results only if data is available for every 2hrs in the last 24hrs of the search. So basically, if a series does not have continuous data, I want to ignore it. How can I do this?
How long is your search taking? You are searching a 61-minute window in your outer search and a 5-hour window in your append. Is the search in your dashboard part of a base search? How long does each of the individual searches take, and if you put both of those searches into a dashboard as separate searches, do they give the correct result counts compared with running them directly as a search?
Do NOT use eval to set any value to "StandardizedAddressService bla bla bla". That is data that I understand is coming from your event - when you make that eval statement you are setting a field called msgTxt (in your example) to the value you give it. What is the name of the field that contains that phrase in your index=... search?

You should extract the "event" field using a rex statement specifying the field you want to extract from, e.g.

| rex field=msgTxt blablabla

and then just use the original SPL I posted at the beginning:

| spath input=event
| rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages
| table Latitude Longitude WarningMessages

after that. If you get something unexpected, add some more fields to the table statement at the end to show what those fields are.
Hello All, I am running one query from a dashboard panel, and exactly the same query from search, but I am getting different counts.

```query for apigateway call```
index=aws_np earliest=1746540480 latest=1746544140 Method response body : sourcetype="aws:apigateway"
| rex field=_raw "Method response body : (?<json>[^$]+)"
| spath input=json path="header.messageID " output=messageID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200
| rename _time as request_time
```dedup is added to remove duplicates```
| dedup messageID
| append [ search index="aws_np" earliest=1746540480 latest=1746558480
    | rex field=_raw "messageID \": String\(\"(?<messageID >[^\"]+)"
    | rex field=_raw "source\": String\(\"(?<source>[^\"]+)"
    | rex field=_raw "type\": String\(\"(?<type>[^\"]+)"
    | rex field=_raw "detail-type\": String\(\"(?<detail_type>[^\"]+)"
    | where source="XXX" and type="XXXXX" and detail_type="XXXX"
    | stats distinct_count(messageID) as cnt_guid by messageID, _time
    ```by _time is added because we get duplicate records with the same time and guid```
    | stats count(cnt_guid) as published_count by messageID
    | dedup messageID
    | fields messageID, published_count ]
| stats values(action) as request_type sum(published_count) as published_count2 by messageID
| where isnotnull(request_type)
| eventstats sum(published_count2) by request_type
| dedup request_type
| search request_type="Create" OR request_type="Update"
| head 2
| fields sum(published_count2) request_type

So I ran the query from the dashboard panel and then used the "Run Search" option to run it directly, but I am getting a different count. The search is giving the correct result; the dashboard is giving less.
Thanks - will be interesting to see what others think.
Hi @KeithH

So - I have tested as you suggested and found the same results as you.

In terms of it being a bug - I'm leaning towards agreeing with you on that front - it's certainly unexpected behaviour to the user, however I'm wondering if it's kind of intentional and possibly not actually affecting things under the hood.

The props.conf docs on this are a little hazy if you ask me, but basically if you are using LINE_BREAKER then SHOULD_LINEMERGE should be set to false *and* you should include a regex with a capture group which is used to determine the end of the first event and the start of the second. The docs then say that when SHOULD_LINEMERGE is true, you should set one or more additional fields, one of which is BREAK_ONLY_BEFORE.

Obviously in your example you have SHOULD_LINEMERGE=true and a BREAK_ONLY_BEFORE, but it looks like Splunk is converting this to SHOULD_LINEMERGE=false with a LINE_BREAKER of the BREAK_ONLY_BEFORE value. Since SHOULD_LINEMERGE=false at this point, the BREAK_ONLY_BEFORE is presumably ignored?

At this point the LINE_BREAKER is "AAAA" as per your example, however it doesn't meet the spec documentation, which states it should have a capture group in the regex! So it feels like even if this is the case, it doesn't follow the docs, and it also seems to make some of these settings meaningless if you can just specify it in LINE_BREAKER after all!?

*Something* isn't right - Support might be claiming that it isn't a bug because it works as intended, but I think this means that the docs are incorrect? If nothing else, the values rendered in the WebUI shouldn't change from the contents of the actual conf files without user interaction. I think some sort of warning/explanation is needed if that is the case! Anyway, let's see if others also have opinions on this, and good luck with the bug/support case!

Did this answer help you?
If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
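To make the two documented configurations from the props.conf discussion above concrete, here is a sketch of what the spec appears to describe (stanza name "test" is taken from the repro in this thread; the exact regex is an illustrative assumption, not a verified fix):

```ini
# Sketch only - two alternative stanzas per the props.conf spec description.

# Event-merging form: SHOULD_LINEMERGE=true plus a merge-control attribute
# such as BREAK_ONLY_BEFORE.
[test]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = AAAA

# Line-breaking form: SHOULD_LINEMERGE=false, and LINE_BREAKER must contain
# a capture group marking the text consumed between the two events.
[test]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)AAAA
```

The bug report in this thread is essentially that the GUI silently rewrites the first form into something like the second, but without the capture group the spec calls for.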
Hi All, help please. Can I get people to agree with me that the following is a bug/design flaw, as my Splunk case is getting nowhere?

Please try this, it only takes a moment, promise...

1. In the Splunk GUI go to Sourcetypes
2. Click New Source Type
3. Give it a name - maybe "test"
4. Click Advanced
5. Delete the LINE_BREAKER setting
6. Add New Setting: BREAK_ONLY_BEFORE and set the value to AAAA
7. Check/set SHOULD_LINEMERGE to true
8. Save
9. Run this search to confirm your settings look good:

| rest /servicesNS/-/-/configs/conf-props
| where title = "test"
| fields BREAK_ONLY_BEFORE LINE_BREAKER SHOULD_LINEMERGE eai:acl.removable eai:acl.sharing eai:appName title

10. In the search results, confirm the values have been saved as expected
11. Re-edit the source type in the GUI
12. Click Advanced
13. Notice that SHOULD_LINEMERGE has been changed to false and LINE_BREAKER has returned, set to AAAA

So the GUI is changing settings when a user re-edits the sourcetype. Perhaps the user just wanted to change the sourcetype description, and saving that would mean the sourcetype no longer works. I reckon this is a bug or design flaw, but Splunk Support are trying to say it is expected behaviour. Please feel free to agree with Splunk Support if you think I am missing something.

Thanks,
Keith
Hi @mchoudhary

Ultimately this depends on what has been extracted and whether the data is CIM compliant. You can see all of the fields for the Network_Traffic data model at https://docs.splunk.com/Documentation/CIM/6.0.4/User/NetworkTraffic

Whilst there isn't a specific "error message" field, I think from memory you might find some use from the "rule" field, which combined with the blocked action should give you some insight into the block reason. Try something like this:

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked" BY All_Traffic.rule sourcetype
Hi @Crabbok

Check out the following - I think this should solve your use case. Note that I have avoided transaction because it's a pretty terrible command to use - I'd only recommend it if you absolutely have to! Instead we can leverage stats.

I've applied some further logic here to split events into individual sessions, so that if a user joins and leaves multiple times in the search window it will show multiple sessions - see the example below.

The full SPL for you to try is:

| makeresults format=csv data="Message,UserXXID,_time
Info: User USER001 has joined the event session.,USER001,2025-05-06 22:20:03
Info: User USER002 has joined the event session.,USER002,2025-05-06 22:21:43
Info: User USER001 has left the event session.,USER001,2025-05-06 22:36:43
Info: User USER003 has joined the event session.,USER003,2025-05-06 22:40:03
Info: User USER002 has left the event session.,USER002,2025-05-06 22:53:23
Info: User USER003 has left the event session.,USER003,2025-05-06 23:01:43
Info: User USER001 has joined the event session.,USER001,2025-05-06 23:18:23"
| rex field=Message "has (?<action>[a-zA-Z]+) the event session"
| eval {action}_time=_time
| sort UserXXID
| streamstats count as userEventNum min(joined_time) as session_joined_time, max(left_time) as session_left_time by UserXXID reset_after="action=\"left\""
| eval action_time=strptime(_time, "%Y-%m-%d %H:%M:%S")
| stats range(action_time) as sessionDurationSeconds, values(action) as actions, max(_time) as session_left_time by UserXXID session_joined_time
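For readers less familiar with streamstats, the session-pairing idea above can be illustrated outside Splunk with a hedged Python sketch (function and data names are invented for illustration): sort each user's events by time, then pair every "joined" with that same user's next "left", leaving an open session if no "left" follows.

```python
# Sketch of per-user session pairing: each "joined" is matched to the same
# user's next "left"; an unmatched "joined" becomes an open session (None).
from collections import defaultdict

def sessions(events):
    """events: iterable of (user, epoch_seconds, action), action in
    {"joined", "left"}. Returns [(user, join_ts, left_ts_or_None), ...]."""
    by_user = defaultdict(list)
    for user, ts, action in events:
        by_user[user].append((ts, action))
    out = []
    for user, evts in by_user.items():
        join = None
        for ts, action in sorted(evts):
            if action == "joined":
                join = ts
            elif action == "left" and join is not None:
                out.append((user, join, ts))   # closed session
                join = None
        if join is not None:
            out.append((user, join, None))     # still connected
    return out

evts = [("USER001", 100, "joined"), ("USER002", 200, "joined"),
        ("USER001", 900, "left"), ("USER002", 2000, "left"),
        ("USER001", 2100, "joined")]
for user, j, l in sessions(evts):
    print(user, "duration:", (l - j) if l else "open")
```

The streamstats `reset_after="action=\"left\""` clause plays the role of the `join = None` reset here: it closes one session and lets the next "joined" for the same user start a fresh one.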
The following link provides the common format for CEF log format, assuming that's your format. https://splunk.github.io/splunk-connect-for-syslog/main/sources/base/cef/#splunk-metadata-with-cef-events
Hi @PickleRick, we're using /event in the HEC endpoint; but even with that, some of the events are getting transformed (splitting, as shared in the screenshots earlier).
Hi! I am creating a basic dashboard which shows the total number of firewall blocks for 3 sourcetypes using the "Network_Traffic" data model. The query is:

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked"

Now I am trying to add one more panel which will show what is causing the block activity (the error message) for each sourcetype with respect to count, but I am unable to figure out the appropriate field (or query) from the data model which relates to the error message. Can someone help me understand which field to group by to get the error message?

P.S. I am new to Splunk.
I'm trying to track the duration of user sessions to a server. I want to know WHICH users are connecting, and how long each session is. The problem is, with multiple users, I'm getting nested transactions, where USER001 joins but USER004 leaves, and that creates an event. I want it to ONLY look at scenarios in which the same user that joins also leaves. I can't seem to get it to do this.

EventCode=44 is the event code for these particular events I want to track. UserXXID is a field extraction I've built to show each user ID, as it is not a standard username that Splunk automatically understood. The two primary types of logs I'm looking for are when a user has "joined" or "left" the event.

Here is the command I'm using:

host="XXComputer04" EventCode=44
| transaction startswith="joined" endswith="left"
| eval Hours=duration/3600
| timechart count by UserXXID

Sample of the log entry I'm trying to parse:

LogName=Application
EventCode=120
EventType=44
ComputerName=XXcomputer004
SourceName=EventXService
Type=Information
RecordNumber=1234427
Keywords=Classic
TaskCategory=State Transition
OpCode=Info
Message= [0x0x]::ProcessLeave()  (xxUSER002xx) left event33001
---------

I have also tried simply | transaction UserXXID to keep unique user IDs together - and while that works, it then somehow ignores ALL "left event" messages and only shows "joined" for any given user. Any help would be appreciated!
This is the result, but it is still not what I am looking for. I have been trying some things on my end as well, and I got a result that is close to what I am looking for, but not quite:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no WHERE max in top20
| sort USER desc
| head 20

This displays the results in the way I am looking for, just not the right results. I am looking for the middle 20 instead of the top 20 or bottom 20. Is there a way or command to display just the middle 20 using the search query above?
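For what it's worth, the "middle 20" selection being asked about can be stated as: rank all series by their total, then keep ranks 21-40 instead of ranks 1-20. Here is a hedged Python sketch of that ranking logic (names and data are invented for illustration; it is not an SPL answer):

```python
# Sketch of "middle N" selection: rank items by total (descending), then
# keep ranks n+1 through 2n instead of the top n that `head 20` would give.
def middle_band(totals, n=20):
    """totals: {name: total_usage}. Returns the names ranked n+1..2n."""
    ranked = sorted(totals, key=totals.get, reverse=True)
    return ranked[n:2 * n]

totals = {f"user{i:02d}": 100 - i for i in range(60)}  # user00 is highest
print(middle_band(totals, n=3))  # → ['user03', 'user04', 'user05']
```

In SPL terms the analogous move would presumably be to compute a per-user total, sort it descending, assign a rank (e.g. with streamstats), and filter to ranks 21-40 before charting; the exact query would depend on the data.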