All Posts


Thanks!
Hi @pedropiin  You could try something like the following. I have used makeresults to visualise this as I don't have your data.

| makeresults count=100
| streamstats count as var1
| eval N=case(var1>180,180,var1>120,120,var1>60,60,var1>30,30,var1>15,15,var1>10,10)
| eval count{N}=1
| fields - count
| stats sum(count*) AS count*
| fillnull value=0 count10 count15 count30 count60 count120 count180
| eval count10=count10+count15+count30+count60+count120+count180
| eval count15=count15+count30+count60+count120+count180
| eval count30=count30+count60+count120+count180
| eval count60=count60+count120+count180
| eval count120=count120+count180

This assumes you want count10 to include anything where var1 is over 10, even if it's also over 30. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
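As a quick sanity check on the sketch above: with makeresults count=100, var1 runs 1 to 100, so the bucketing puts 40 events in count60 (61-100), 30 in count30 (31-60), 15 in count15 (16-30) and 5 in count10 (11-15), with nothing over 120. After the cumulative evals the expected output is:

count10=90 count15=85 count30=70 count60=40 count120=0 count180=0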
Try to avoid using the transaction command because it's very non-performant.  Try this, instead.  Search for all Create and Close events then keep only the most recent for each alert.id/alert.message pair.  Throw out the Close events and what's left will be Creates without a Close.

index=foo ("Create" OR "Close")
```Select the most recent event for each id/message```
| dedup alert.id alert.message
```Discard the Close events```
| search NOT "Close"

(Note the last command is search rather than where; where expects a boolean expression, so | where NOT "Close" would fail.)
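A variation on the same idea, sketched under the assumption that alert.message carries the Create/Close action as its value: keep only the most recent action per alert.id, then filter for the alerts whose last action was a Create.

index=foo ("Create" OR "Close")
| stats latest(alert.message) AS last_action BY alert.id
| where last_action=="Create"

This avoids dedup's dependence on event order and gives you one row per still-open alert.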
Hi @tdavison76  I think you might be able to achieve this by adding 'AND _time <= relative_time(now(), "-1y@y")' to your search (adjusting the dates accordingly) so that you ignore old events where the Create event is missing because it has aged out. I would also look to change your search to not use the transaction command, which is very resource-intensive and has limitations; instead you could use/adapt the following to get similar output:

index=YourIndex earliest=-1y latest=now alert.message IN ("Create","Close")
| eval {alert.message}=1 ``` or use | eval Create=IF(alert.message=="Create",1,0), Close=IF(alert.message=="Close",1,0) ```
| stats earliest(_time) as start_time, latest(_time) as end_time, sum(Create) as isCreate, sum(Close) as isClose BY alert.id
| where isClose=0

Note the BY alert.id on the stats, which groups the events per alert in the same way the transaction did. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
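One thing the transaction command provided that the stats version above does not is a duration field. If you need it, it can be restored with a one-line follow-on appended after the stats (same field assumptions as above):

| eval duration=end_time-start_time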
@pedropiin wrote: But I'm aware this is definitely not the optimal way as, to my understanding, this will go through all the instances and count the ones > 10, then will go through all the instances again counting the ones > 15 and so on.  I'm not convinced this is correct.  Have you looked at the job inspector stats for this search?  I think you'll find it's not that inefficient.  Any attempt to "chain" filters is likely to perform much worse.
Hi everyone I just started working with Splunk and I have a query in which one of the steps is to count the number of instances where a certain field has value > 10. But I have to count the number of instances with value > 10, > 15, > 30, > 60, > 120 and > 180. The way I'm doing it now is just by executing different counts, as follows:

<search>...
| eval var1=...
| stats count(eval(var1 > 10)) as count10, count(eval(var1 > 15)) as count15, count(eval(var1 > 30)) as count30, count(eval(var1 > 60)) as count60, count(eval(var1 > 120)) as count120, count(eval(var1 > 180)) as count180
...

But I'm aware this is definitely not the optimal way as, to my understanding, this will go through all the instances and count the ones > 10, then will go through all the instances again counting the ones > 15 and so on. How would I execute this count making use of the fact that, e.g., to count the number of instances > 120, I only need to check the set of instances > 60, and so on? That is, how do I chain these counts and use them as "filters"? It's important to note that I don't want to use "where var1 > 10" multiple times as I also need to compute other metrics related to the whole dataset (e.g., avg(var1)) and, to my understanding, using just one

| stats count(eval(var > 10)) as count10

will "drop" all of the other columns of my query. Anyways, how would I do this? Thank you in advance.
Hello, I really appreciate any help on this one, I can't figure it out. I am using the following to show only the "Create" events that don't have a corresponding "Close" event.

| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0

This works, but the search is running for "All Time", and we only keep events up to 1 yr. I've run into the issue that once one of the "Create" events reaches that 1 yr and is deleted, the orphaned "Close" event makes the pair appear in the search results. I'm not sure why a "Close" event without a corresponding "Create" event would be counted, or how I can prevent a single "Create" or "Close" event from being returned once its counterpart has been deleted or is beyond the search time frame selected. Any ideas on this one? Thanks for any help, you will save me some sleepless nights. Tom
Hi @splunklearner , no, the Load Balancer ensures that you don't lose any logs even if one receiver is down, which is the first condition for HA, but it doesn't give any feature about duplicating logs. The only solution is the one I described. Ciao. Giuseppe
Hi @Sultan77 , you have two choices: create a lookup (called e.g. perimeter.csv and containing at least one field: "host") with the list of hosts to monitor, and run a search like the following:

| tstats count where index=* earliest=-2h latest=now BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Otherwise, if you don't want to create and manage the lookup, you could check whether a host sent logs e.g. in the last 30 days but not in the last 2 hours:

| tstats count latest(_time) AS _time where index=* earliest=-30d latest=now BY host
| where _time<(now()-7200)

The second search requires less maintenance but gives you less control. Ciao. Giuseppe
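A third, lighter-weight option worth knowing about is the metadata command, which already tracks the most recent event time per host, so no lookup or tstats is needed. A sketch, with the index name as a placeholder:

| metadata type=hosts index=yourindex
| where recentTime < now()-7200
| convert ctime(recentTime)

Like the second tstats search, this can only report on hosts that have sent something within the retention period, so it offers the same trade-off of less maintenance for less control.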
Splunk is not good at finding things which aren't there - normally you need to give it a list of what to expect and then check to see which of those are there. For example, you could create a list of hosts that are normally sending events to Splunk and count the events from those hosts over a period of time. Any hosts which don't have events may have stopped sending events.
Good day everyone. I am trying to monitor whether any hosts in the environment have stopped sending logs. The challenge is to do this through Content Management > Correlation Search, so it can be scheduled, e.g., every 2 hours. Any ideas?
@gcusello can deploying a load balancer between the syslog servers help us get rid of the same log being ingested on 2 syslog servers?
The logic for the groupings is that services 1 and 2 share the same servers; service 3 uses different servers. Therefore, if there was an issue with those servers, we could see how many services would be affected. I am trying to abridge the data but still show those specific dependencies between services where they have a lot of shared assets.
This is an example of the dashboard using the groupings based on colours. The first panel is without grouping, the second one is with.

<dashboard version="1.1" theme="light">
  <label>Network Viz Groupings Test</label>
  <row>
    <panel>
      <title>Network Viz No Groups</title>
      <viz type="network-diagram-viz.network-diagram-viz">
        <search>
          <query>| makeresults
| eval _raw="
'Child Class','Parent Class','from','to'
Database,Service,Service1,Database1
Database,Service,Service3,Database1
Database,Service,Service3,Database2
Network,Server,Server3,Network1
Network,Server,Server4,Network1
Server,Server,Server1,Server2
Server,Server,Server2,Server3
Server,Service,Service1,Server3
Server,Service,Service2,Server2
Server,Service,Service3,Server4
Service,Service,Service1,Service2
"
| multikv forceheader=1
| fields - _raw, _time, linecount
| rename "Parent_Class_" as "Parent Class", "Child_Class_" as "Child Class", from_ as from, to_ as to
```Logic used for color grouping```
| eval color=case('Parent Class'=="Service", "red", 'Parent Class'=="Server", "green", 0==0, "black")</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel>
      <title>Network Viz Grouped By Colour</title>
      <viz type="network-diagram-viz.network-diagram-viz">
        <search>
          <query>| makeresults
| eval _raw="
'Child Class','Parent Class','from','to'
Database,Service,Service1,Database1
Database,Service,Service3,Database1
Database,Service,Service3,Database2
Network,Server,Server3,Network1
Network,Server,Server4,Network1
Server,Server,Server1,Server2
Server,Server,Server2,Server3
Server,Service,Service1,Server3
Server,Service,Service2,Server2
Server,Service,Service3,Server4
Service,Service,Service1,Service2
"
| multikv forceheader=1
| fields - _raw, _time, linecount
| rename "Parent_Class_" as "Parent Class", "Child_Class_" as "Child Class", from_ as from, to_ as to
```Logic used for color grouping```
| eval color=case('Parent Class'=="Service", "red", 'Parent Class'=="Server", "green", 0==0, "black")</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="network-diagram-viz.network-diagram-viz.arrowLocation">none</option>
        <option name="network-diagram-viz.network-diagram-viz.canZoom">true</option>
        <option name="network-diagram-viz.network-diagram-viz.clusterBy">color</option>
        <option name="network-diagram-viz.network-diagram-viz.defaultLinkLength">100</option>
        <option name="network-diagram-viz.network-diagram-viz.defaultNodeType">circle</option>
        <option name="network-diagram-viz.network-diagram-viz.draggableNodes">true</option>
        <option name="network-diagram-viz.network-diagram-viz.drilldownClick">singleOrDouble</option>
        <option name="network-diagram-viz.network-diagram-viz.enablePhysics">true</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchy">false</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchyDirection">Top-Down</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchySortMethod">directed</option>
        <option name="network-diagram-viz.network-diagram-viz.levelSeparation">150</option>
        <option name="network-diagram-viz.network-diagram-viz.linkTextLocation">bottom</option>
        <option name="network-diagram-viz.network-diagram-viz.linkTextSize">medium</option>
        <option name="network-diagram-viz.network-diagram-viz.missingImageURL">/static/app/network-diagram-viz/customimages/404.gif</option>
        <option name="network-diagram-viz.network-diagram-viz.nodeSpacing">100</option>
        <option name="network-diagram-viz.network-diagram-viz.nodeTextSize">medium</option>
        <option name="network-diagram-viz.network-diagram-viz.physicsModel">forceAtlas2Based</option>
        <option name="network-diagram-viz.network-diagram-viz.shakeTowards">roots</option>
        <option name="network-diagram-viz.network-diagram-viz.smoothEdgeType">dynamic</option>
        <option name="network-diagram-viz.network-diagram-viz.smoothEdges">true</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenNode">nd_node_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenToNode">nd_to_node_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenToolTip">nd_tooltip_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenValue">nd_value_token</option>
        <option name="network-diagram-viz.network-diagram-viz.wrapNodeText">true</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>
Hi @sureshkumaar  Are your events across multiple lines? You might have more success with the following transform:

[setParsing]
INGEST_EVAL = queue=IF(match(_raw, "systemd|rsyslogd|auditd"),queue,"nullQueue")

Then in your props.conf refer to this for your sourcetype:

[yourSourcetype]
TRANSFORMS-filter1 = setParsing

This will set the queue depending on a match within the IF statement. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
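Once the transform is deployed to the parsing tier (indexers or a heavy forwarder), a quick way to verify it is a search like this, where the index and sourcetype names are placeholders; it looks for any indexed event that lacks all three keywords:

index=yourIndex sourcetype=yourSourcetype
| regex _raw!="systemd|rsyslogd|auditd"
| head 10

If the filter is working, this should come back empty.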
Maybe you can share your current search that does this grouping. What's your logic for deciding that server 2 and server 3 need to be in a different server group to server 4, is it simply on the presence of a connection between those 2 services? @danspav
Feb 3 11:10:15 server-server-server-server systemd[1]: Removed slice User Slice of UID 0.
Feb 3 04:14:23 server-server-server-server rsyslogd[679024]: imjournal: 16021 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 3 11:01:01 server-server-server-server CROND[3905399]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 3 11:10:55 server-server-server-server esfdaemon[3938104]: 0
Feb 3 10:24:36 server-server-server-server auditd[2689]: Audit daemon rotating log files

Is there a way to capture the whole line where the systemd, rsyslogd or auditd keyword matches, using props.conf and transforms.conf? The config below matches up to the specific keyword; what about the remaining part of the line after the keyword?

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^\w{3}\s\s\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\s+(?:[+\-A-Z0-9]*\s+)?(systemd|rsyslogd|auditd)
DEST_KEY = queue
FORMAT = indexQueue
As I said, assuming your events have already been ingested as JSON. It looks like they aren't, or at least the fields you need aren't. Try this:

| spath all_request_headers
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers
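For anyone following along without this data, here is a self-contained illustration of the two-step spath pattern above, using a fabricated event in which all_request_headers holds JSON as an escaped string (the header names are invented):

| makeresults
| eval _raw="{\"all_request_headers\": \"{\\\"Host\\\": \\\"example.com\\\", \\\"User-Agent\\\": \\\"curl/8.0\\\"}\"}"
| spath all_request_headers
| spath input=all_request_headers

The first spath pulls the all_request_headers string out of _raw; the second parses that string into Host and User-Agent fields.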
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers

When I add this search after my index and sourcetype, it shows nothing in the events. Can you please help @ITWhisperer?
Thanks for providing the raw example. It looks like some of the header fields have quite high entropy, meaning that they could create a lot of distinct values for a dashboard/table. Are they wanting to see rare or most frequent values for these headers, perhaps? Presumably there are some headers, such as "Date", which aren't going to add much value? As previously mentioned, I think it's important to understand the purpose of the dashboard, otherwise the panels created might be meaningless and a waste of search time. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
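If rare or most frequent values turn out to be what is wanted, the built-in rare and top commands cover both. A sketch with placeholder index/sourcetype and a hypothetical header field:

index=yourIndex sourcetype=yourSourcetype
| spath all_request_headers
| spath input=all_request_headers
| rare limit=10 User-Agent

Swapping rare for top shows the most frequent values instead.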