All Posts


The logs mention this: 08-27-2024 13:00:20.824 +0800 INFO TailingProcessor [32248 MainTailingThread] - Parsing configuration stanza: monitor://D:\temp\zkstats.json
This is my props.conf:
[sourcetype]
KV_MODE = json
INDEXED_EXTRACTIONS = json
I am not able to get any data in at all, so I have not even gotten to the point of considering crcSalt for the source.
When I want to enable the use case "ESCU - Windows Gather Victim Host Information Camera - Rule", the query in the correlation search looks like this: index=* source=WinEventLog:Microsoft-Windows-PowerShell/Operational OR source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=* ScriptBlockText= "* Win32_PnPEntity *" ScriptBlockText= "*SELECT*" ScriptBlockText= "*WHERE*" ScriptBlockText = "*PNPClass*" ScriptBlockText IN ("*Image*", "*Camera*") | stats count min(_time) as firstTime max(_time) as lastTime by EventCode ScriptBlockText Computer UserID | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_gather_victim_host_information_camera_filter` The query references the ScriptBlockText field, but when I check under all fields I can only find a ScriptBlock_ID field. My question is: how can I get the ScriptBlockText field? As I explore more use cases, many correlation searches use the ScriptBlockText field. Thank you.
Is there a way to reference or combine multiple fields into a single name so that they can be referenced by that new name? For example: somefield IN (a,b,c,d). If I run a query for "somefield" I get "a", "b", "c", "d" returned. I want to be able to refer to "somefield" by a single name. Is that possible? So if I run a query for "somefield", I would get the aggregate results of a, b, c, and d?
Thank you for all the information. Hope you have a nice day, sir. Ciao, Zake
It would help to know what you've tried so far so we don't waste your time. Have you tried this regex? It looks for a pipe, some digits, another pipe, and then the desired field (up to the following pipe): \|\d+\|(?<field>[^\|]+)
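For illustration, here is a self-contained sketch of that regex in a rex extraction; the sample event text below is made up, not from the original post:

```
| makeresults
| eval _raw="header|12345|desired value|trailer"
| rex field=_raw "\|\d+\|(?<field>[^\|]+)"
| table field
```

With that sample event, field should come out as "desired value"; the character class [^\|]+ stops the capture at the next pipe.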
What is the plan to have the Data Manager AWS Metadata input support all the sources that are supported by the Splunk Add-on for AWS?
I completely agree and love that site too! One thing about Splunk regex is that it can be a little different, and having the context of your existing search as well as your indexes would be invaluable to add to a suggested regex, not to mention allowing a UI to test it out on your existing data sets.
Until then, regex101.com is a good site for testing regular expressions, or just ask here or on Slack in #regex.
Assuming note and src are already extracted, try something like this: | eventstats values(eval(if(note="ACCESS BLOCK","BLOCKED",null()))) as blocked by src | where isnull(blocked)
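A runnable sketch of the same idea using makeresults; the src and note values below are hypothetical sample data:

```
| makeresults format=json data="[{\"src\":\"10.0.0.1\",\"note\":\"ACCESS BLOCK\"},{\"src\":\"10.0.0.1\",\"note\":\"ACCESS ALLOW\"},{\"src\":\"10.0.0.2\",\"note\":\"ACCESS ALLOW\"}]"
| eventstats values(eval(if(note="ACCESS BLOCK","BLOCKED",null()))) as blocked by src
| where isnull(blocked)
```

Only the 10.0.0.2 event survives, because every event from a src that ever logged "ACCESS BLOCK" gets the blocked marker and is filtered out.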
You can use appendpipe to add a row using the current search pipeline context: | makeresults format=json data="[{\"device\":\"foo\", \"user_numbers\":19}, {\"device\":\"foo\", \"user_numbers\":39}, {\"device\":\"bar\", \"user_numbers\":39}, {\"device\":\"foo\", \"user_numbers\":44}]" | eventstats dc(user_numbers) as overall_distinct_user_count | stats dc(user_numbers) as distinct_users_for_device, first(overall_distinct_user_count) as overall_distinct_user_count by device | appendpipe [stats max(overall_distinct_user_count) as overall_distinct_user_count | eval device = "All Devices" ]  
Is it possible to get it to the last row?
You could do something like this using eventstats: | makeresults format=json data="[{\"device\":\"foo\", \"user_numbers\":19}, {\"device\":\"foo\", \"user_numbers\":39}, {\"device\":\"bar\", \"user_numbers\":39}, {\"device\":\"foo\", \"user_numbers\":44}]" | eventstats dc(user_numbers) as overall_distinct_user_count | stats dc(user_numbers), first(overall_distinct_user_count) as overall_distinct_user_count by device  
So, I want the distinct count of user_numbers by device, but in the same chart/table I want the distinct count of all the user_numbers in a last column called total. Is it possible to get a stats formula with different fields in the by clause? This is something that I have: | stats dc(user_numbers) by device but I also want to show the total in the same table: | stats dc(user_numbers) Right now I count duplicates and show this: | addcoltotals label="Total Members" labelfield=device I really hope this is possible! Thank you!
AI to assist in creating valid regex expressions would be super helpful.
We are on Splunk Cloud 9.1. Has anyone successfully been able to ingest data from SendGrid into Splunk? It looks like the only option they have is a webhook that requires a URL to send to. I am no Splunk wizard, so I may just be missing the easy answer here, but I can't find a way to generate a URL for SendGrid to send into Splunk Cloud.
What would be a better method to find events where the message field contains "new_state: Diagnostic, old_state: Home", as opposed to wildcards? I am looking for the events directly chronologically after the keystone event, that is, with a timestamp more recent than the keystone event. How would I structure this streamstats command in the rest of my query? That is, there are separate criteria which the data events must meet in order to be viable.
Thanks for the reply. Regarding forwarding to two output groups, is there some documentation that describes this in detail? In any case, given that we have multiple app log files and script output already going to one of the Enterprise servers, how difficult will it be to segregate which log files go to which Enterprise server?
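For reference, the usual mechanism for this is two tcpout groups in outputs.conf plus a per-stanza _TCP_ROUTING override in inputs.conf; the group names, hosts, and monitored path below are placeholders, not details from this thread:

```
# outputs.conf on the forwarder (hosts and ports are placeholders)
[tcpout]
defaultGroup = siteA

[tcpout:siteA]
server = splunkA.example.com:9997

[tcpout:siteB]
server = splunkB.example.com:9997

# inputs.conf - route one monitor stanza to the other group
[monitor:///var/log/app/special.log]
_TCP_ROUTING = siteB
```

Inputs without an explicit _TCP_ROUTING keep going to the default group, so existing log files are unaffected while individual files are redirected.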
OK. There are some things that are highly suboptimal in your search (especially the use of wildcards), but I'm not digging into that at the moment. Also be aware that "next" might not mean the same thing to everyone, so you should be precise when specifying your problem. By default, Splunk returns data in reverse chronological order, so Splunk's "next" event will actually be the previous event time-wise. Anyway, the way to match something and some subsequent events (again, in Splunk's order; you might want to reverse or sort your events before doing so) is to use the streamstats command with the count function, and then simply filter on events having that count value below a given threshold.
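As a hedged sketch of that pattern (the base search, the message match condition, and the threshold of 5 are all placeholder assumptions): each keystone event increments a running group id, and a second streamstats counts position within each group so you can keep only the first few events after a keystone.

```
<your base search>
| sort 0 _time
| eval is_keystone=if(match(message, "new_state: Diagnostic, old_state: Home"), 1, 0)
| streamstats sum(is_keystone) as keystone_id
| where keystone_id > 0
| streamstats count as events_since_keystone by keystone_id
| where events_since_keystone <= 5
```

The sort 0 _time puts events in chronological order first, so "next" here really means later in time.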