All Posts

@pedropiin wrote: "But I'm aware this is definitely not the optimal way as, to my understanding, this will go through all the instances and count the ones > 10, then will go through all the instances again counting the ones > 15 and so on."

I'm not convinced this is correct. Have you looked at the job inspector stats for this search? I think you'll find it's not that inefficient. Any attempt to "chain" filters is likely to perform much worse.
Hi everyone. I just started working with Splunk and I have a query in which one of the steps is to count the number of instances where a certain field has a value > 10. But I have to count the number of instances with value > 10, > 15, > 30, > 60, > 120 and > 180. The way I'm doing it now is just by executing different counts, as follows:

<search>...
| eval var1=...
| stats count(eval(var1 > 10)) as count10, count(eval(var1 > 15)) as count15, count(eval(var1 > 30)) as count30, count(eval(var1 > 60)) as count60, count(eval(var1 > 120)) as count120, count(eval(var1 > 180)) as count180
...

But I'm aware this is definitely not the optimal way as, to my understanding, this will go through all the instances and count the ones > 10, then will go through all the instances again counting the ones > 15, and so on. How would I execute this count making use of the fact that, e.g., to count the number of instances > 120, I only need to check the set of instances > 60, and so on? That is, how do I chain these counts and use them as "filters"?

It's important to note that I don't want to use "where var1 > 10" multiple times, as I also need to compute other metrics over the whole dataset (e.g., avg(var1)) and, to my understanding, using just one

| stats count(eval(var > 10)) as count10

will "drop" all of the other columns of my query. Anyway, how would I do this? Thank you in advance.
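For what it's worth, each count(eval(...)) clause is evaluated per event during a single pass of stats, so the original search does not rescan the data once per threshold. A minimal sketch of the same idea, with avg(var1) riding along in the same pass to show that whole-dataset metrics are not dropped (field names kept from the question):

<search>...
| eval var1=...
| stats avg(var1) AS avg_var1,
        count(eval(var1 > 10)) AS count10,
        count(eval(var1 > 15)) AS count15,
        count(eval(var1 > 30)) AS count30,
        count(eval(var1 > 60)) AS count60,
        count(eval(var1 > 120)) AS count120,
        count(eval(var1 > 180)) AS count180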
Hello, I really appreciate any help on this one, I can't figure it out. I am using the following to show only the "Create" events that don't have a corresponding "Close" event:

| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0

This works, but the search runs over "All Time", and we only keep events for up to 1 year. I've run into the issue that once one of the "Create" events reaches that 1 year and is deleted, the "Close" event makes it appear in the search results. I'm not sure why a "Close" event without a corresponding "Create" event would be counted, or how I can prevent a single "Create" or "Close" event from being returned once the other event has been deleted or is beyond the selected search time frame. Any ideas on this one? Thanks for any help, you will save me some sleepless nights. Tom
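One way to sidestep orphaned transactions entirely is to group with stats instead of transaction and keep only IDs that have a Create but no Close. A sketch, assuming 'alert.message' begins with either Create or Close (the phase field is introduced here for illustration):

| eval phase=case(like('alert.message', "Create%"), "Create",
                  like('alert.message', "Close%"), "Close")
| stats values(phase) AS phases BY "alert.id"
| where isnotnull(mvfind(phases, "Create")) AND isnull(mvfind(phases, "Close"))

Because a lone "Close" event never satisfies the isnotnull(mvfind(phases, "Create")) test, it cannot appear in the results the way an evicted transaction can.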
Hi @splunklearner , no, the Load Balancer ensures that you don't lose any logs even if one receiver is down (that's the first requirement for HA), but it doesn't provide any feature for handling duplicated logs. The only solution is the one I described. Ciao. Giuseppe
Hi @Sultan77 , you have two choices: create a lookup (called e.g. perimeter.csv and containing at least one field, "host") with the list of hosts to monitor, and run a search like the following:

| tstats count where index=* earliest=-2h latest=now BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Otherwise, if you don't want to create and manage the lookup, you could check whether a host sent logs e.g. in the last 30 days but not in the last 2 hours:

| tstats count latest(_time) AS _time where index=* earliest=-30d latest=now BY host
| where _time<(now()-7200)

The second search requires less maintenance but gives you less control. Ciao. Giuseppe
Splunk is not good at finding things which aren't there - normally you need to give it a list of what to expect and then check which items on that list are missing. For example, you could create a list of hosts that normally send events to Splunk and count the events from those hosts over a period of time. Any hosts which don't have events may have stopped sending.
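If a rough baseline is acceptable, there is also a lookup-free sketch using the metadata command, which reports a recentTime (index time of the most recent event) per host for the selected indexes; the 7200-second threshold here is an arbitrary example:

| metadata type=hosts index=*
| eval hours_silent=round((now()-recentTime)/3600, 1)
| where recentTime < now()-7200
| fields host, hours_silent

This only knows about hosts that have sent something at some point, so a host that never reported at all still needs the expected-hosts lookup approach.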
Good day everyone. I am trying to monitor whether any hosts in the environment have stopped sending logs. The challenge here is to implement it through Content Management > Correlation Search, so it can be scheduled, e.g., every 2 hours. Any ideas?
@gcusello can deploying a load balancer in front of the syslog servers help us get rid of the same log being ingested on 2 syslog servers?
The logic for the groupings is that services 1 and 2 share the same servers, while service 3 uses different servers. Therefore, if there was an issue with those servers, we could see how many services would be affected. I am trying to condense the data but still show those specific dependencies between services where they have a lot of shared assets.
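One way that logic might be expressed in the search itself, sketched against the from/to table used in this thread (the eventstats step and colour choices are assumptions, not tested against the viz), is to derive each server's colour from whether Service3 reaches it, so Service3's servers cluster apart from the shared ones:

| eventstats values(from) AS feeding_services BY to
| eval color=case(
    match(to, "^Server") AND isnotnull(mvfind(feeding_services, "^Service3$")), "blue",
    match(to, "^Server"), "green",
    'Parent Class'=="Service", "red",
    true(), "black")

Since clusterBy=color puts same-coloured nodes in one group, giving Service3's servers their own colour should split the single server blob into the two groups you want.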
This is an example of the dashboard using the groupings based on colours. The first panel is without grouping, the second one is with.

<dashboard version="1.1" theme="light">
  <label>Network Viz Groupings Test</label>
  <row>
    <panel>
      <title>Network Viz No Groups</title>
      <viz type="network-diagram-viz.network-diagram-viz">
        <search>
          <query>| makeresults
| eval _raw="
'Child Class','Parent Class','from','to'
Database,Service,Service1,Database1
Database,Service,Service3,Database1
Database,Service,Service3,Database2
Network,Server,Server3,Network1
Network,Server,Server4,Network1
Server,Server,Server1,Server2
Server,Server,Server2,Server3
Server,Service,Service1,Server3
Server,Service,Service2,Server2
Server,Service,Service3,Server4
Service,Service,Service1,Service2
"
| multikv forceheader=1
| fields - _raw, _time, linecount
| rename "Parent_Class_" as "Parent Class", "Child_Class_" as "Child Class", from_ as from, to_ as to
```Logic used for color grouping```
| eval color=case('Parent Class'=="Service", "red", 'Parent Class'=="Server", "green", 0==0, "black")</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel>
      <title>Network Viz Grouped By Colour</title>
      <viz type="network-diagram-viz.network-diagram-viz">
        <search>
          <query>| makeresults
| eval _raw="
'Child Class','Parent Class','from','to'
Database,Service,Service1,Database1
Database,Service,Service3,Database1
Database,Service,Service3,Database2
Network,Server,Server3,Network1
Network,Server,Server4,Network1
Server,Server,Server1,Server2
Server,Server,Server2,Server3
Server,Service,Service1,Server3
Server,Service,Service2,Server2
Server,Service,Service3,Server4
Service,Service,Service1,Service2
"
| multikv forceheader=1
| fields - _raw, _time, linecount
| rename "Parent_Class_" as "Parent Class", "Child_Class_" as "Child Class", from_ as from, to_ as to
```Logic used for color grouping```
| eval color=case('Parent Class'=="Service", "red", 'Parent Class'=="Server", "green", 0==0, "black")</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="network-diagram-viz.network-diagram-viz.arrowLocation">none</option>
        <option name="network-diagram-viz.network-diagram-viz.canZoom">true</option>
        <option name="network-diagram-viz.network-diagram-viz.clusterBy">color</option>
        <option name="network-diagram-viz.network-diagram-viz.defaultLinkLength">100</option>
        <option name="network-diagram-viz.network-diagram-viz.defaultNodeType">circle</option>
        <option name="network-diagram-viz.network-diagram-viz.draggableNodes">true</option>
        <option name="network-diagram-viz.network-diagram-viz.drilldownClick">singleOrDouble</option>
        <option name="network-diagram-viz.network-diagram-viz.enablePhysics">true</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchy">false</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchyDirection">Top-Down</option>
        <option name="network-diagram-viz.network-diagram-viz.hierarchySortMethod">directed</option>
        <option name="network-diagram-viz.network-diagram-viz.levelSeparation">150</option>
        <option name="network-diagram-viz.network-diagram-viz.linkTextLocation">bottom</option>
        <option name="network-diagram-viz.network-diagram-viz.linkTextSize">medium</option>
        <option name="network-diagram-viz.network-diagram-viz.missingImageURL">/static/app/network-diagram-viz/customimages/404.gif</option>
        <option name="network-diagram-viz.network-diagram-viz.nodeSpacing">100</option>
        <option name="network-diagram-viz.network-diagram-viz.nodeTextSize">medium</option>
        <option name="network-diagram-viz.network-diagram-viz.physicsModel">forceAtlas2Based</option>
        <option name="network-diagram-viz.network-diagram-viz.shakeTowards">roots</option>
        <option name="network-diagram-viz.network-diagram-viz.smoothEdgeType">dynamic</option>
        <option name="network-diagram-viz.network-diagram-viz.smoothEdges">true</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenNode">nd_node_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenToNode">nd_to_node_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenToolTip">nd_tooltip_token</option>
        <option name="network-diagram-viz.network-diagram-viz.tokenValue">nd_value_token</option>
        <option name="network-diagram-viz.network-diagram-viz.wrapNodeText">true</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>
Hi @sureshkumaar  Are your events across multiple lines? You might have more success with the following transform:

[setParsing]
INGEST_EVAL = queue=if(match(_raw, "systemd|rsyslogd|auditd"), queue, "nullQueue")

Then in your props.conf refer to this for your sourcetype:

[yourSourcetype]
TRANSFORMS-filter1 = setParsing

This sets the queue depending on whether the match inside the if statement succeeds: events containing one of the keywords keep their current queue, and everything else goes to the nullQueue. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
Maybe you can share your current search that does this grouping. What's your logic for deciding that Server2 and Server3 need to be in a different server group from Server4? Is it simply the presence of a connection between those 2 services? @danspav
Feb 3 11:10:15 server-server-server-server systemd[1]: Removed slice User Slice of UID 0.
Feb 3 04:14:23 server-server-server-server rsyslogd[679024]: imjournal: 16021 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 3 11:01:01 server-server-server-server CROND[3905399]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 3 11:10:55 server-server-server-server esfdaemon[3938104]: 0
Feb 3 10:24:36 server-server-server-server auditd[2689]: Audit daemon rotating log files

Is there a way to capture the whole line where the systemd, rsyslogd, or auditd keyword matches, using props.conf and transforms.conf? The regex below matches only up to the specific keyword; how do I keep the rest of the line after the keyword?

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^\w{3}\s\s\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\s+(?:[+\-A-Z0-9]*\s+)?(systemd|rsyslogd|auditd)
DEST_KEY = queue
FORMAT = indexQueue
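For what it's worth, a queue-routing REGEX only has to match somewhere in the event; Splunk routes the entire event to the queue, it never truncates at the matched text. A sketch of the same pair with that behaviour spelled out in comments (the simplified keyword regex and the [yourSourcetype] stanza name are illustrative assumptions):

# transforms.conf
[setnull]
# Matches every event and sends it to the nullQueue (discarded)...
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
# ...unless this later transform also matches, which overrides the queue
# back to indexQueue. The whole event is indexed, including everything
# after the keyword.
REGEX = systemd|rsyslogd|auditd
DEST_KEY = queue
FORMAT = indexQueue

# props.conf -- order matters: setnull first, setparsing second
[yourSourcetype]
TRANSFORMS-set = setnull, setparsing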
As I said, assuming your events have already been ingested as JSON. It looks like they aren't, or at least the fields you need aren't. Try this:

| spath all_request_headers
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers

When I add this search after my index and sourcetype, it shows nothing in the events. Can you please help, @ITWhisperer?
Thanks for providing the raw example. It looks like some of the header fields have quite high entropy, meaning they could create a lot of values for a dashboard/table. Do they want to see rare or most frequent values for these headers, perhaps? Presumably there are some headers, such as "Date", which aren't going to add much value? As previously mentioned, I think it's important to understand the purpose of the dashboard, otherwise the panels created might be meaningless and a waste of search time. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
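If frequency is what they're after, a minimal sketch (assuming the headers have been extracted with spath as in the searches below; the limit of 10 and the choice of the User-Agent header are arbitrary examples):

| fields _time all_request_headers
| spath input=all_request_headers
| top limit=10 "User-Agent"

Swapping top for rare would surface the outlier values instead, which is often more interesting for high-entropy headers.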
You could start with a dashboard with a couple of statistics tables using the following searches (assuming your events have already been ingested as JSON):

| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers

| fields _time all_response_headers
| spath input=all_response_headers
| fields - _raw all_response_headers

| fields _time waf_log
| spath input=waf_log
| fields - _raw waf_log

Or you could combine them into a single table:

| fields _time all_request_headers all_response_headers waf_log
| spath input=all_request_headers
| spath input=all_response_headers
| spath input=waf_log
| fields - _raw all_request_headers all_response_headers waf_log

To be honest, these won't be very useful, but it is what you/they asked for and might help you clarify what exactly they do want to see in the dashboard.
I have a table with hundreds of thousands of rows which I am seeking to visualise in Splunk. The data is far too big for the Network Viz diagram to show without latency issues, so I am seeking to group it down and will chunk it further with filters. Users are mainly interested in the services we have and how they connect with other services through our assets. Details like server names are less important, so those can be grouped. I am having issues with the default Network Viz diagram and its grouping behaviour.

Example data:

Parent   | Child     | Parent Class | Child Class
Service1 | Service2  | Service      | Service
Server1  | Server2   | Server       | Server
Server2  | Server3   | Server       | Server
Service1 | Server3   | Service      | Server
Service2 | Server2   | Service      | Server
Service3 | Server4   | Service      | Server
Service1 | Database1 | Service      | Database
Service3 | Database1 | Service      | Database
Service3 | Database2 | Service      | Database
Server3  | Network1  | Server       | Network
Server4  | Network1  | Server       | Network

Desired look below. Notice how there are multiple server groups rather than just one: we can clearly identify that services 1 and 2 are linked through servers, while service 3 is separate and connected to a different group of servers.

And here is my attempt at using the group functionality of the Network Viz diagram, where I used the asset class to make colour groupings. Notice that the groups are just generic and cannot be named, and all servers have been grouped together, making it look like all 3 services are linked through servers. Expanding this diagram clearly shows they are not: service 3 is connected to a different server.

Is there a way to reach my desired grouping method with the default Splunk tools? Is there another add-on I could utilise?
@niketnilay is it possible without using the additional Timeline Custom Visualization extension?
{"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7","report_timestamp":"2025-02-14T15:51:13.176715Z","service_engine":"GB-DRN-AB-Tier2-se-bm... See more...
{"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7","report_timestamp":"2025-02-14T15:51:13.176715Z","service_engine":"GB-DRN-AB-Tier2-se-bmqhk","vcpu_id":0,"log_id":108692,"client_ip":"128.168.178.113","client_src_port":24487,"client_dest_port":443,"client_rtt":1,"ssl_session_id":"bc586cf2272c7130a6e90551566bf12c","ssl_version":"TLSv1.3","ssl_cipher":"TLS_AES_256_GCM_SHA384","sni_hostname":"wasphictst-wdc.hc.cloud.uk.sony","http_version":"1.1","method":"GET","uri_path":"/cmd","uri_query":"test=&& whoami","rewritten_uri_query":"test=%26%26%20whoami","user_agent":"insomnia/2021.5.3","host":"wasphictst-wdc.hc.cloud.uk.sony","persistent_session_id":3472328296900352087,"request_content_type":"text/plain","response_content_type":"text/html; charset=iso-8859-1","request_length":193,"cacheable":true,"pool":"pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6","pool_name":"p-wasphictst-wdc.hc.cloud.uk.sony-wdc-443","server_ip":"128.160.88.68","server_name":"128.160.88.68","server_conn_src_ip":"128.160.77.235","server_dest_port":80,"server_src_port":25921,"server_rtt":3,"server_response_length":373,"server_response_code":404,"server_response_time_first_byte":78,"server_response_time_last_byte":81,"response_length":6148,"response_code":404,"response_time_first_byte":81,"response_time_last_byte":81,"compression_percentage":0,"compression":"NO_COMPRESSION_CAN_BE_COMPRESSED","client_insights":"","request_headers":833,"response_headers":13,"request_state":"AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT","all_request_headers":{"Host":"wasphictst-wdc.hc.cloud.uk.sony","User-Agent":"insomnia/2021.5.3","Cookie":"Cookie1=Jijin","Content-Type":"text/plain","Accept":"*/*","Content-Length":0},"all_response_headers":{"Content-Type":"text/html; charset=iso-8859-1","Content-Length":196,"Connection":"keep-alive","Date":"Fri, 14 Feb 2025 15:51:13 GMT","Server":"Apache/2.4.37 (Red Hat Enterprise Linux)","Strict-Transport-Security":"max-age=31536000; includeSubDomains"},"significant_log":["ADF_HTTP_CONTENT_LENGTH_HDR_WITH_UNSUPPORTED_METHOD","ADF_RESPONSE_CODE_4XX"],"headers_sent_to_server":{"X-Forwarded-For":"128.168.178.113","Host":"wasphictst-wdc.hc.cloud.uk.sony","Content-Length":0,"User-Agent":"insomnia/2021.5.3","Cookie":"Cookie1=Jijin","Content-Type":"text/plain","Accept":"*/*","X-Forwarded-Proto":"https"},"headers_received_from_server":{"Date":"Fri, 14 Feb 2025 15:51:13 GMT","Server":"Apache/2.4.37 (Red Hat Enterprise Linux)","Content-Length":196,"Content-Type":"text/html; charset=iso-8859-1"},"vs_ip":"128.160.71.101","waf_log":{"status":"PASSED","latency_request_header_phase":351,"latency_request_body_phase":1544,"latency_response_header_phase":50,"latency_response_body_phase":15,"rules_configured":true,"psm_configured":false,"application_rules_configured":false,"allowlist_configured":false,"allowlist_processed":false,"rules_processed":true,"psm_processed":false,"application_rules_processed":false,"memory_allocated":71496,"omitted_signature_stats":{"rules":0,"match_elements":0},"omitted_app_rule_stats":{"rules":0,"match_elements":0}},"request_id":"9mY-Spaj-RgC9","servers_tried":1,"jwt_log":{"is_jwt_verified":false},"max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":0,"max_ingress_latency_be":0,"avg_ingress_latency_be":0,"conn_est_time_be":0,"source_ip":"128.168.178.113","vs_name":"v-wasphictst-wdc.hc.cloud.uk.sony-443","tenant_name":"admin"}