Hi @mackey , this solution is if you don't have Enterprise Security. If you have ES, you can add your IOC list to the threat intelligence lookups. Ciao. Giuseppe
You can also check out two nice commands, xyseries and untable, which can be used to (de)tabularize such data series.
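As an illustrative sketch (the field names host, sourcetype, and count are hypothetical; substitute your own), xyseries pivots a flat series into a matrix and untable reverses it:

```
... | stats count BY host sourcetype
| xyseries host sourcetype count
```

This produces one row per host with one column per sourcetype; appending `| untable host sourcetype count` turns the matrix back into flat host/sourcetype/count rows.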
OK, the only thing we know for sure is that for this particular event the timestamp has not been extracted from the event itself. There can be several reasons for it:
1) Props for this sourcetype, source, or host specify that the ingestion time should be assumed, not the event time.
2) The timestamp format for extraction is wrongly defined and doesn't match the event.
3) The event is ingested with a method that bypasses timestamp extraction (e.g. the HEC /event endpoint).
4) The timestamp was extracted but was out of limits, so Splunk assumed the timestamp from the previous event (but that's relatively unlikely; you'd probably either see many events with the same timestamp, or mostly well-extracted times with single exceptions). This can be connected with 2).
5) You have another timestamp within your event which Splunk extracts the time from (but I suppose you'd notice that).
Usually the most probable causes are 2, 1, and 3 (in that order of frequency).
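For cause 2, a minimal props.conf sketch; the sourcetype name and timestamp format here are placeholders, so adjust them to match your actual raw events:

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

If TIME_FORMAT doesn't match what is actually at the start of the raw event, Splunk falls back to other timestamp sources, which matches the symptoms above.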
Hi @mackey, is your org using Splunk Enterprise Security?
Hi @tbessie, as @sainag_splunk also said, maybe there's a timestamp extraction error. Could you share a sample of your events and the props.conf stanza for their sourcetype? Ciao. Giuseppe
Hi @mwolfe, don't use count() on the flag fields, use sum() (count() counts every non-null value, and the if() always produces one):

index=web uri_path="/somepath" (status="200" OR status="400")
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version
| eval success=if(status=200,1,0)
| eval failure=if(status=400,1,0)
| stats sum(failure) AS fail_count sum(success) AS success_count BY app
| eval success_rate=round((success_count / (success_count + fail_count))*100,1)
| table app success_rate

Otherwise, you could move the eval into the stats:

index=web uri_path="/somepath" (status="200" OR status="400")
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version
| stats count(eval(status=400)) AS fail_count count(eval(status=200)) AS success_count BY app
| eval success_rate=round((success_count / (success_count + fail_count))*100,1)
| table app success_rate

Ciao. Giuseppe
Hi @mackey, if you have these IOCs in a lookup table you can run a very simple search. If your lookup is called my_ioc.csv and the IP list is in a column called ip, you could run:

index=* [ | inputlookup my_ioc.csv | rename ip AS query | fields query ]

In this way you execute a full-text search over all your events for all the IPs listed in your lookup. If instead you want to search for these IPs in pre-defined fields, you only have to change the field name in the subsearch, e.g. if you want to search in the src field:

index=* [ | inputlookup my_ioc.csv | rename ip AS src | fields src ]

Ciao. Giuseppe
It is difficult to advise without seeing your events. Please share some anonymised events which demonstrate the issue. Please share the raw event in a code block (using the </> button above) to preserve formatting.
Try this:

| rex max_match=0 field=tags "(?<namevalue>[^:,]+:[^, ]+)"
| mvexpand namevalue
| rex field=namevalue "(?<name>[^:]+):(?<value>.*)"
| eval {name}=value
We deal with hundreds of IOCs (mostly flagged IPs) that come in monthly, and we need to check them for hits in our network. We do not want to continue running a summary search one IP at a time. Is it possible to use a lookup table (or any other way) to search hundreds at a time, or does this have to be done one at a time? I am very new to Splunk and still learning. I need to see if we have had any traffic from or to these IPs.
Hi, I made changes to my indexer storage, but when I look at the disk usage panel in the Monitoring Console, the value is negative. Has anyone faced this? I already refreshed the assets with the Monitoring Console setup refresh and restarted the instance, but nothing changed.
I don't know where else to ask, that's why I'm asking here, and I still don't know how to solve this issue. I'm just a Path Finder on the Splunk community and don't have access to open a support ticket with Splunk directly; maybe it can be solved if you are able to open one.
I think I got it:

| eval success=if(status=200,1,0)
| eval failure=if(status=400,1,0)
| stats sum(failure) as fail_sum, sum(success) as success_sum by app
| eval success_rate=round((success_sum / (success_sum + fail_sum))*100,1)
| table app, success_rate
Thanks - this is very close to what I'm looking for (I do want to perform this extraction at search time), but it may need a couple of tweaks. 1) All of the depts have a space in them (some more than one) and the rex is only picking up the first word of the dept. Examples: "support services", "xyz operations r&d". 2) Also, when I look into each event to see whether the tags fields are extracted, only one actually gets extracted, but it's not the same one each time. The "name" and "namevalue" fields match the one field that does get extracted. Hope that makes sense?
I've got data like "[clientip] [host] - [time] [method] [uri_path] [status] [useragent]" and run the following search:

index=web uri_path="/somepath" (status="200" OR status="400")
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version

I've split up the useragent just fine and verified the output. I now want to compare status by "app", so I've added the following:

| stats count by app, status

Which gives me:

app              status  count
android app 1.0  200     5000
ios app 2.0      400     3
android app 1.1  200     500
android app 1.0  400     12
ios app 2.0      200     3000

How can I compare, for a given "app" (combo of platform, name, version), the rate of success, where success is a 200 response and failure a 400? I understand that I need to divide the success count by the success + failure count, but how do I combine this data? Also note that I need to consider that some apps may not have any 400 errors.
It worked for me! Thanks a lot!
Did you manage to find a resolution to this issue? I am also facing the same issue.
What do you mean by needing to switch the config back to the TCP method? How did you do that? After this change, do you see it listening on port 8089? Run netstat -pant | egrep 8089 - do you see LISTEN?
Hi @WUShon, have you tried mapping.fieldColors? Refer to https://docs.splunk.com/Documentation/Splunk/9.0.1/Viz/PanelreferenceforSimplifiedXML, and check Dashboard Studio for more options. If this helps, please upvote.
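As a rough Simple XML sketch, assuming a map panel and that the option takes a JSON object of value-to-color mappings like the charting equivalent (the field values "US" and "FR" are placeholders; verify the exact syntax against the panel reference linked above):

```
<option name="mapping.fieldColors">{"US": 0x65A637, "FR": 0xD93F3C}</option>
```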
Just filter with | where _time>=now()-86400 (or whatever time limit you need) before you remove the _time field with the table command.