All Posts



Hi @jovnice , I suggest adding index=wineventlog because it gives you better performance than the following solution! Anyway, if you don't want this solution, you could add the wineventlog index to the default search path (in [Settings > Roles > <your_role> > Indexes]). Ciao. Giuseppe
Hi Gary, Just to make sure I understand you correctly: you have a webpage, and the underlying requests of these pages make ajax calls to different domains, correct? Or are you referring to incoming traffic to the web page coming from 2 different domains? Also, 140 API calls in total is relatively small and should be easily manageable. Could you explain, or provide screenshots of, what is becoming too cluttered? That would help us better understand the problem you are facing and answer your question. Ciao
As PickleRick wrote, why not use the 500MB/day Free license? 5GB/30 days ≈ 166MB/day, much less than the 500MB Free limit. The Free license does have some limitations that may cause problems. Another solution is to point all the small Splunk setups at the same license; then they will all share the same 5GB license.
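A quick sanity check of the arithmetic above, assuming the monthly 5 GB is counted in decimal megabytes (5000 MB):

```python
# Back-of-the-envelope check: 5 GB per month spread over 30 days.
monthly_mb = 5 * 1000        # 5 GB, in decimal MB
per_day_mb = monthly_mb // 30
print(per_day_mb)            # 166 - comfortably under the 500 MB/day Free limit
```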
@SplunkUser5 - Yes, @jotne is right about the transforms.conf issue. But if you want to exclude at the input level: this is a common issue I come across all the time, and I keep forgetting again and again that Windows paths sometimes require extra backslashes in the regex. Try: C:\\\Users\\\.*\\\AppData\\\Local\\\Microsoft\\\Teams\\\current (try the 4-backslash version as well, as I'm not sure which one will work; I always have to trial-and-error between 2, 3, and 4 backslashes.) I hope this helps!!! Kindly upvote if it does!!!
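The escaping can be sanity-checked outside Splunk. This is a rough Python sketch (Python's re engine stands in for Splunk's PCRE here, which is an assumption, and the path is a hypothetical example) showing that each literal backslash in a Windows path must be doubled in the regex text; how many backslashes end up in a .conf file depends on the extra quoting layer, hence the trial and error mentioned above:

```python
import re

# A Windows path as it would appear in the raw event (single backslashes).
path = r"C:\Users\jdoe\AppData\Local\Microsoft\Teams\current\Teams.exe"

# In the regex text, each literal backslash is escaped as \\ .
# Written as a non-raw Python string, that is four characters per backslash.
pattern = "C:\\\\Users\\\\.*\\\\AppData\\\\Local\\\\Microsoft\\\\Teams\\\\current"

print(bool(re.search(pattern, path)))  # True
```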
@AL3Z - Sample reference query:

| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (Processes.parent_process_name=wmiprvse.exe OR Processes.parent_process_name=services.exe OR Processes.parent_process_name=svchost.exe OR Processes.parent_process_name=wsmprovhost.exe OR Processes.parent_process_name=mmc.exe) (Processes.process_name=powershell.exe OR (Processes.process_name=cmd.exe AND Processes.process=*powershell.exe*) OR Processes.process_name=pwsh.exe OR (Processes.process_name=cmd.exe AND Processes.process=*pwsh.exe*)) by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process Processes.process_id Processes.parent_process_id
| rename Processes.* as *
| eval firstTime = strftime(firstTime, "%F %T")
| eval lastTime = strftime(lastTime, "%F %T")

FYI, this is just one sample query to detect lateral movement via PowerShell. Lateral movement is a broad topic, so please refer to my original answer. I hope this helps!!! Kindly upvote if it does!!!
Your regex:

REGEX = "Teams\.exe<\/Data>"

does not hit your input data because of the quotes. Do not quote your regex in transforms.conf:

REGEX = Teams\.exe<\/Data>
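The effect of the quotes can be shown outside Splunk; a minimal Python sketch (using Python's re as a stand-in for Splunk's regex engine, with a trimmed-down hypothetical event string), where the quoted pattern tries to match literal quote characters that are not in the data:

```python
import re

# Trimmed stand-in for the event text (hypothetical value).
event = "<Data Name='NewProcessName'>...Teams.exe</Data>"

quoted   = r'"Teams\.exe<\/Data>"'   # the quotes become part of the pattern
unquoted = r"Teams\.exe<\/Data>"

print(bool(re.search(quoted, event)))    # False - the quotes are matched literally
print(bool(re.search(unquoted, event)))  # True
```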
@jovnice - Please specify the index. If you don't know the index, run this search over a longer time range, something like the last 7 days or so:

index=* source="*WinEventLog:Application"

Try this search and see if you get any results. Once you see some results, you can add more search criteria. I hope this helps!!! Kindly upvote if this helps!!
See sort. | sort 0 Score desc is semantically identical to | sort limit=0 Score desc. But | sort - 0 Score is equivalent to | sort 0, Score desc. That is, you are sorting two fields, 0 and Score, in descending order without using limit=, so the default limit of 10,000 applies. Sort is memory hungry; capping at 10,000 by default is a sensible choice.
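The limit behaviour can be pictured as a plain sort followed by a truncation. This is a rough Python model of that idea, not Splunk code; the function name and the 10,000 default are illustrative, based on the default limit described above:

```python
# Rough model: `| sort 0 Score desc` passes limit 0 ("no limit"),
# while `| sort - 0 Score` leaves the default limit of 10,000 in place
# because "0" is consumed as a field name rather than a limit.
def spl_sort(results, key, descending=True, limit=10_000):
    ordered = sorted(results, key=lambda r: r[key], reverse=descending)
    return ordered if limit == 0 else ordered[:limit]

rows = [{"Score": i} for i in range(50_000)]
print(len(spl_sort(rows, "Score", limit=0)))  # 50000 - explicit limit 0
print(len(spl_sort(rows, "Score")))           # 10000 - default limit applies
```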
@bowesmana thanks. chart is slow on my data; after some trial and error I found a solution: first use "stats" to get the count, then use "xyseries".
We have configured different services (cyberflows-sre, cybersec, cybervault, ...) on our server. In the AppD metric browser those services are shown as numbers (342, 343, 345, ...). How (and where) can I find out which number corresponds to which service?
Hello, We're using PAN-OS 10.1.11 and Palo Alto Networks Add-on version 6.5.0. We want to upgrade the Add-on to 8.1.1 and would like to know which PAN-OS versions are supported by Palo Alto Networks Add-on version 8.1.1. We were unable to locate this information in the Add-on release notes or installation guide. Thanks and Rgds
Hi All, I found this https://community.splunk.com/t5/Dashboards-Visualizations/9-0-5-ui-prefs-conf-Why-my-default-search-mode-in-search-page-on/m-p/652793 and in there is this: SplunkWeb used to persist the latest UI preferences by updating ui-prefs.conf on the fly. After an upgrade to 9.0.5+ or 9.1.0+ this behavior changed: it no longer uses ui-prefs.conf to remember the user's UI-level preferences, but instead uses the URL in the request or localStorage/Web Storage. In Firefox I found webappsstore.sqlite in my ../Library/Application Support/Firefox/Profiles/e0fxb1hs.default-release, which sounds similar to the above. Is this where the ui-prefs.conf information was moved to? I've had a request from a user who wants to set the 'Selected fields', but after the upgrade to 9.1.2 the changes would be stored in a sqlite DB. Is this correct? Is there any way of changing the 'Selected fields' other than using the backend? Does this work for other apps beyond Search? TIA, Joe
Hi Folks, I'm running into trouble excluding new process creation events for Teams from being indexed. It's an expected application that starts at logon, so we're not super worried about it. I've looked at a handful of community articles, tried what was posted, and I'm stumped. My regex syntax looks fine, but Splunk still isn't excluding the events. Here's what I've tried so far:

_____inputs.conf_____
blacklist3 = EventCode="4688" new_process_name=".*Teams.exe"
blacklist3 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name='NewProcessName'>C:\\Users\\.*\\AppData\\Local\\Microsoft\\Teams\\current\\Teams\.exe<\/Data>"
blacklist3 = $XmlRegex="<EventID>4688<\/EventID>.*<DataName='NewProcessName'>C:\\Users\\.*\\AppData\\Local\\Microsoft\\Teams\\current\\Teams\.exe<\/Data>"
blacklist3 = EventCode="4688" $XmlRegex="Name=\'NewProcessName\'>C:\\Users\\.*\\AppData\\Local\\Microsoft\\Teams\\current\\Teams.exe<\/Data>"

None of these have worked. I found a couple of community articles saying props.conf and transforms.conf were the proper way to filter out events, so I tried these as well:

_____props.conf_____
[WinEventLog:Security]
TRANSFORMS-null = 4688cleanup

_____transforms.conf_____
[4688cleanup]
REGEX = "Teams\.exe<\/Data>"
DEST_KEY = queue
FORMAT = nullQueue

And this:

_____transforms.conf_____
[4688cleanup]
REGEX = <EventID>4688<\/EventID>.*<DataName='NewProcessName'>C:\\Users\\.*\\AppData\\Local\\Microsoft\\Teams\\current\\Teams\.exe<\/Data>
DEST_KEY = queue
FORMAT = nullQueue

None of these have worked so far, and I'd appreciate any input y'all have.
Here is a copy of an event I'm trying to exclude from being indexed (Teams.exe as a new process): <Event xmlns='http:// schemas .microsoft .com/win/2004/08/events/event '><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3b0328c30d}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-02-21T22:11:25.7542758Z'/><EventRecordID>4096881</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='1124'/><Channel>Security</Channel><Computer>{Device_FQDN}</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-1-11-111111111-111111111-1111111111-111111</Data><Data Name='SubjectUserName'>{user}</Data><Data Name='SubjectDomainName'>{Domain}</Data><Data Name='SubjectLogonId'>0x11111111</Data><Data Name='NewProcessId'>0x5864</Data><Data Name='NewProcessName'>C:\Users\{user}\AppData\Local\Microsoft\Teams\current\Teams.exe</Data><Data Name='TokenElevationType'>%%1936</Data><Data Name='ProcessId'>0x4604</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>S-1-0-0</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>0x0</Data><Data Name='ParentProcessName'>C:\Users\{user}\AppData\Local\Microsoft\Teams\current\Teams.exe</Data><Data Name='MandatoryLabel'>S-1-11-1111</Data></EventData></Event> And a copy of an event we'd like to keep (Teams.exe as a parent process, but not the new process): <Event xmlns='http:// schemas .microsoft .com/win/2004/08/events/event '><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3b0328c30d}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-02-21T22:33:19.5932251Z'/><EventRecordID>4212468</EventRecordID><Correlation/><Execution ProcessID='4' 
ThreadID='31196'/><Channel>Security</Channel><Computer>{Device_FQNDN</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-1-11-111111111-111111111-1111111111-111111</Data><Data Name='SubjectUserName'>{user}</Data><Data Name='SubjectDomainName'>{Domain}</Data><Data Name='SubjectLogonId'>0x1111111</Data><Data Name='NewProcessId'>0x7664</Data><Data Name='NewProcessName'>C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe</Data><Data Name='TokenElevationType'>%%1936</Data><Data Name='ProcessId'>0x4238</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>S-1-0-0</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>0x0</Data><Data Name='ParentProcessName'>C:\Users\{user}\AppData\Local\Microsoft\Teams\current\Teams.exe</Data><Data Name='MandatoryLabel'>S-1-11-1111</Data></EventData></Event>     Events obfuscated for privacy. Like I said, the regex syntax looks fine as far as I can tell and matches in regex101 so I'm hoping it's a small thing I'm overlooking. We're running Splunk v9.1.1 if that makes any difference. Thanks! -SplunkUser5
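One way to sanity-check the intended filter outside Splunk is a quick Python test (Python's re stands in for Splunk's regex engine, and the event strings below are trimmed, hypothetical stand-ins for the two samples): anchoring the pattern to the NewProcessName element means a Teams.exe parent process alone does not trigger it.

```python
import re

# Trimmed stand-ins for the two sample events (hypothetical values).
drop_me = ("<Data Name='NewProcessName'>C:\\Users\\jdoe\\AppData\\Local\\Microsoft"
           "\\Teams\\current\\Teams.exe</Data>"
           "<Data Name='ParentProcessName'>C:\\Windows\\explorer.exe</Data>")
keep_me = ("<Data Name='NewProcessName'>C:\\Program Files (x86)\\Microsoft"
           "\\Edge\\Application\\msedge.exe</Data>"
           "<Data Name='ParentProcessName'>C:\\Users\\jdoe\\AppData\\Local"
           "\\Microsoft\\Teams\\current\\Teams.exe</Data>")

# [^<]* keeps the match inside a single element, so Teams.exe appearing
# only as ParentProcessName cannot satisfy the pattern.
pattern = r"<Data Name='NewProcessName'>[^<]*\\Teams\.exe</Data>"

print(bool(re.search(pattern, drop_me)))  # True  - would be filtered out
print(bool(re.search(pattern, keep_me)))  # False - would be kept
```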
Hello, I don't know how to simulate this using makeresults, but I have over 10,000 rows of data (let's say 50,000). If I sort descending using "| sort - 0 Score", it only gives me 10,000 rows, but if I use "| sort 0 Score desc", it gives me all 50,000 rows. What is the difference between using sort - and sort desc? Why does sort - limit the results to 10,000? Thank you so much.

index=test | sort - 0 Score ==> only 10,000 rows, so I need to use "| sort Score desc"

Name | Score
Name1 | 5
Name2 | 0
Name3 | 7
Name4 | 0
…
Name50000 | 9
Sure.. So, here it goes.. I have a dashboard that is tracking 'jobs'... completed jobs, and this particular widget is tracking 'running' jobs (start but no end). I might be tracking around 80 jobs, but there should not be more than 5 or 6 'running' at any particular time, so it's not creating 80 transactions. Everything is working as designed, but this one job that starts and ends at the same time showed up in my 'running' jobs widget and then is missing from my completed jobs widget. Once I run my initial 'search' for log events, here is what I'm doing:

index=anIndex sourcetype=aSourcetype (aJob1 OR aJob2 OR aJob3) AND ("START of script" OR "COMPLETED OK" OR "ABORTED, exiting with status" )
| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")
| eval closed_txn = if ( isnull(closed_txn),0,closed_txn)
| search closed_txn=0
| sort _time
| eval aDay = strftime(_time, "%a. %b. %e, %Y")
| eval aStartTime=strftime(_time, "%H:%M:%S %p")
| eval aDuration=tostring((now()-_time), "duration")
| eval aEndTime = "--- Running ---"
| table aHostName aDay aJobName aStartTime aEndTime aDuration

But this one job is causing me issues, as transaction is not picking up the start/end events that have the same _time.
Yes, because KV store is not a time-series DB like the Splunk index effectively is. A KV store has no fixed _time field like there is for every event in a Splunk index - you define the fields in your collection, so you need to control what gets filtered. If you have a field called KV_entry_time, which is stored as an epoch, then you will need to convert your time picker selection to epoch start/end values and then  |inputlookup {collection_name} where KV_entry_time >= $time_picker_start$ AND KV_entry_time < $time_picker_end$  There is a trick to converting the time picker input to a start/end epoch value - you need a background search in the XML like this <search> <query> | makeresults | addinfo </query> <done> <set token="time_picker_start">$result.info_min_time$</set> <set token="time_picker_end">$result.info_max_time$</set> </done> <earliest>$time_picker.earliest$</earliest> <latest>$time_picker.latest$</latest> </search> which will use addinfo to get the time picker's epoch values from info_*_time and then the token setting will convert those to the time_picker_* tokens you can use in the collection search.  Hope this helps  
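The epoch range filter on the collection can be modelled outside Splunk. This is a rough Python sketch of the `where KV_entry_time >= start AND KV_entry_time < end` logic, with hypothetical records and epoch bounds standing in for the collection and the time picker tokens:

```python
# Hypothetical KV store records; KV_entry_time is stored as an epoch.
records = [
    {"host": "a", "KV_entry_time": 1700000000},
    {"host": "b", "KV_entry_time": 1700003600},
    {"host": "c", "KV_entry_time": 1700007200},
]

# Stand-ins for $time_picker_start$ / $time_picker_end$ (info_min/max_time).
start, end = 1700000000, 1700007200

# Half-open range, mirroring `>= start AND < end` in the inputlookup where clause.
selected = [r for r in records if start <= r["KV_entry_time"] < end]
print([r["host"] for r in selected])  # ['a', 'b']
```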
And same in 2024.  Thank you!
This statement

| stats values(index) as index by InstanceId

should certainly give you a field called index which will contain main/other or both. Doing

| stats values(*) as * dc(index) as index_count by InstanceId

would give you all the values of every field from both indexes and a field called index_count that would contain a 1 or 2.

You can't match the resource id against the instance id before the stats, as the events are not yet "joined" together, so there will either be a ResourceId (from index=main) OR an InstanceId (from index=other); the coalesce+stats will join the two datasets together on that now common field (due to coalesce). Effectively what you are saying is that after the stats, it will show, for each InstanceId (where InstanceId has come from ResourceId in index=main), the values of the indexes those IDs were found in.

After the stats you can then match as needed. I believe what you are trying to do is to then say "I need to only show results where a ResourceId from index=main has also been found as InstanceId from index=other." The logic to decide that is mvcount(index)=2 (this means it was in both indexes). You could use index_count from the dc(index) example above; that is the same as doing the mvcount.

Doing values(*) as * is simply a way to carry through all fields combined from both indexes when joining the data together. As you have tried stats values(index) as index..., that should simply carry forward the main+other values to that field. Can you give an example of the data you have in both, and a search result that highlights what you are getting?
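The coalesce + stats join can be modelled in plain Python; a rough sketch (hypothetical event records and IDs, not Splunk code) of grouping on the coalesced field and keeping only IDs seen in both indexes:

```python
# Hypothetical events from two indexes.
events = [
    {"index": "main",  "ResourceId": "i-111"},
    {"index": "other", "InstanceId": "i-111"},
    {"index": "main",  "ResourceId": "i-222"},   # only present in main
]

groups = {}
for e in events:
    # coalesce(InstanceId, ResourceId)
    key = e.get("InstanceId") or e.get("ResourceId")
    # stats values(index) by InstanceId
    groups.setdefault(key, set()).add(e["index"])

# keep only IDs found in both indexes, i.e. mvcount(index)=2
both = sorted(k for k, idx in groups.items() if len(idx) == 2)
print(both)  # ['i-111']
```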
Thank you for your kind response. I am getting 10 detections if there are 10 rows in the result, but the average time to detect should be an average of all the time differences for one alert (a mean time). Please find the attached screenshot for more information. The Splunk alert Splunk_Attack_1 triggered 2 times; I want to take the average of the times and display only one result per alert with the difference.

Sample result:

_time | search_name | event time Hour at Source | Mean Time to Detect
2/5/2024 19:47:10 | Splunk_Attack_1 | 2/5/2024 17:47:10 | 2 Hr 3 Min 19 Secs.000000
2/5/2024 19:20:10 | Splunk_Attack_1 | 2/5/2024 17:20:10 | 2 Hr 7 Min 18 Secs.000000
2/5/2024 19:30:35 | Splunk_Attack_2 | 2/5/2024 18:30:35 | 1 Hr 37 Min 12 Secs.000000
2/5/2024 18:20:15 | Splunk_Attack_2 | 2/5/2024 18:20:15 | 1 Hr 26 Min 15 Secs.000000
2/6/2024 18:05:15 | Splunk_Attack_2 | 2/6/2024 18:05:15 | 1 Hr 26 Min 15 Secs.000000
2/7/2024 16:55:15 | Splunk_Attack_3 | 2/7/2024 14:55:15 | 2 Hr 0 Min 18 Secs.000000
2/8/2024 16:35:15 | Splunk_Attack_3 | 2/8/2024 14:35:15 | 2 Hr 20 Min 18 Secs.000000
2/9/2024 16:10:15 | Splunk_Attack_3 | 2/9/2024 14:10:15 | 2 Hr 40 Min 18 Secs.000000

Expected result:

_time | search_name | event time Hour at Source | Mean Time to Detect
2/5/2024 19:47:10 | Splunk_Attack_1 | 2/5/2024 17:47:10 | 2 Hr 3 Min 19 Secs.000000
2/5/2024 19:20:10 | Splunk_Attack_2 | 2/5/2024 17:20:10 | 2 Hr 7 Min 18 Secs.000000
2/5/2024 19:30:35 | Splunk_Attack_3 | 2/5/2024 18:30:35 | 1 Hr 37 Min 12 Secs.000000
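The "one averaged row per alert" idea can be sketched outside Splunk (in SPL it would roughly be a `stats avg(...) by search_name`, though the exact query depends on the fields). A rough Python model, using the two Splunk_Attack_1 durations from the sample (2 Hr 3 Min 19 Secs = 7399 s, 2 Hr 7 Min 18 Secs = 7638 s):

```python
from collections import defaultdict

# Detection delays in seconds, keyed by alert name (taken from the sample rows).
rows = [
    ("Splunk_Attack_1", 7399),  # 2 Hr 3 Min 19 Secs
    ("Splunk_Attack_1", 7638),  # 2 Hr 7 Min 18 Secs
    ("Splunk_Attack_2", 5832),  # 1 Hr 37 Min 12 Secs
]

delays = defaultdict(list)
for name, secs in rows:
    delays[name].append(secs)

# One averaged value per alert name, like `stats avg(delay) by search_name`.
averages = {name: sum(v) / len(v) for name, v in delays.items()}
print(averages["Splunk_Attack_1"])  # 7518.5
```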
Eighty transactions of up to an hour each is a new requirement that my previous suggestion will not handle. The transaction command is pretty inefficient, and becomes more so when it has to track many transactions over a long time range. Rather than help you with a specific, sub-optimal solution, let's see if there's another approach to the problem. What problem are you trying to solve?