All Posts

Also, I can't find anything in Splunk Enterprise: nothing in the forwarder management section and no data whatsoever.
I am a newbie to Splunk, so any help is appreciated. I have Splunk Enterprise on my Windows computer and a Splunk forwarder on an Ubuntu VPS with a Cowrie honeypot installed. My problem is that when I try to ping my local computer from the VPS, I get 100% packet loss. Also, the splunkd log file is full of errors like "Cooked connection to <my-local-ip> timed out" and "... blocked for blocked_seconds=3000. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Thanks for helping; I am waiting for your response.
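A minimal checklist of the forwarder-to-indexer path, assuming the default receiving port 9997 (the IPs and output group name below are placeholders, not taken from the post):

On the Windows indexer, receiving must be enabled (Settings > Forwarding and receiving, or inputs.conf):

  [splunktcp://9997]
  disabled = 0

On the Ubuntu forwarder, outputs.conf should point at that host and port:

  [tcpout:default-autolb-group]
  server = <windows-ip>:9997

From the VPS, test the TCP port directly; a failed ping by itself proves little, since the Windows firewall often drops ICMP while still allowing (or also blocking) TCP 9997:

  nc -vz <windows-ip> 9997

If the port test fails, open TCP 9997 inbound in the Windows Defender Firewall and confirm the VPS can actually route to the machine; a home computer behind NAT is not reachable from a public VPS without port forwarding.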
@jrs42 you can use 'stats' instead of 'eventstats' to optimize:

index=foo message="magic string"
| stats p99(duration) as p99val, count(eval(duration > p99(duration))) as count
Hi @shimada-k , sorry, I mistyped the field name; the interface field is probably named differently, perhaps just "interface". Please check the exact field name and replace it in the search:

index=gnmi ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "tags.ipv4-entry_prefix" AS ipv4_entry_prefix "tags.network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(interface) AS interface BY tags_index
| sort ipv4_entry_prefix network_instance_name

Ciao. Giuseppe
Tried this, but had no luck with it.
Not helpful as all the fields are correct.
Assuming you are changing the groupby_field token in the change handler of the time selection input, which is essentially the input that is being waited for, you could also initialise the groupby_field token in an init block in SimpleXML; it is perhaps a little more complicated to do in Studio.
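A minimal SimpleXML sketch of that idea; the token name comes from the question, while the field lists, labels and eventtype are placeholders:

  <form version="1.1">
    <label>Group-by token example</label>
    <!-- give groupby_field a value up front so the panel never sits at "waiting for input" -->
    <init>
      <set token="groupby_field">a,b,c</set>
    </init>
    <fieldset submitButton="false">
      <input type="time" token="time_tok">
        <label>Time range</label>
        <default>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </default>
      </input>
    </fieldset>
    <row>
      <panel>
        <table>
          <search>
            <query>eventtype="abc" | stats count by $groupby_field$</query>
            <earliest>$time_tok.earliest$</earliest>
            <latest>$time_tok.latest$</latest>
          </search>
        </table>
      </panel>
    </row>
  </form>

The existing change handler on the time input can then overwrite groupby_field (for example to a,d,e for dates before 1st Jun 2024) and the panel simply reruns.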
Hi gcusello, Thanks for your prompt reply. I tried your solution. It's almost perfect, but the interface field does not appear. I would appreciate some additional advice to resolve it.

index=gnmi ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "tags.ipv4-entry_prefix" AS ipv4_entry_prefix "tags.network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(tags.interface) AS interface BY tags_index
| sort ipv4_entry_prefix network_instance_name

Result: (screenshot omitted)

Many thanks, Kenji
I highly recommend that you look at the training Splunk offers; this will get you into the deeper aspects of how to administer Splunk and build up knowledge. The Splunk Admin courses should get you started, and the various modules should cover what you are looking for at a deeper level. https://www.splunk.com/en_us/training/course-catalog.html?sort=Newest&filters=filterGroup3SplunkEnterpriseCloudAdministrator
Hi Team, I have the stats group-by fields in a token that changes dynamically based on the time selection. For example, if I select a date on or after 1st Jun 2024, my query will be:

eventtype="abc" | stats count by a,b,c

and if I select a date before 1st Jun 2024, e.g. 30th May 2024, I would like the group-by fields to be:

eventtype="abc" | stats count by a,d,e

So my current implementation puts the group-by fields in a token, the token is set based on the time selection, and the final query is:

eventtype="abc" | stats count by $groupby_field$

The issue is that the dashboard says "waiting for input" the moment I add the token input to the stats group-by clause. I would appreciate your suggestions/help to handle this scenario.

Thanks, Mani
The problem is that this _row does not correspond to linecount=2, but is recognized as 1. I will give you one _row of data as an example.

(_row recognized as one)
1333561147.74 48957 131.178.233.243 TCP_DENIED/403 1914 GET http://bewfsnfwka.net/ edgy@demo.com NONE/- - BLOCK_AMW_REQ-DefaultGroup-Demo_Clients-NONE-NONE-NONE <nc,dns,-9,"Trojan-Downloader.Gen",100,13689,586638,-,-,-,-,-,-,-,-,nc,-> - - 1262356487.060 16922 131.178.233.243 TCP_REFRESH_HIT/200 474 GET http://damtare.by.ru/id.txt edgy@demo.com DIRECT/damtare.by.ru text/html DEFAULT_CASE-DefaultGroup-Demo_Clients-NONE-NONE-DefaultRouting <IW_scty,-6.9,0,-,-,-,-,0,-,-,-,-,-,-,-,IW_scty,-> - -

(1)
1333561147.74 48957 131.178.233.243 TCP_DENIED/403 1914 GET http://bewfsnfwka.net/ edgy@demo.com NONE/- - BLOCK_AMW_REQ-DefaultGroup-Demo_Clients-NONE-NONE-NONE <nc,dns,-9,"Trojan-Downloader.Gen",100,13689,586638,-,-,-,-,-,-,-,-,nc,-> - -

(2)
1262356487.060 16922 131.178.233.243 TCP_REFRESH_HIT/200 474 GET http://damtare.by.ru/id.txt edgy@demo.com DIRECT/damtare.by.ru text/html DEFAULT_CASE-DefaultGroup-Demo_Clients-NONE-NONE-DefaultRouting <IW_scty,-6.9,0,-,-,-,-,0,-,-,-,-,-,-,-,IW_scty,-> - -

How do I separate (1) and (2) into their own _rows? Please give an example of a regular expression that will separate them. Thank you.
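A sketch of the usual index-time fix for this, assuming every event starts with a 10-digit epoch timestamp as in the sample above; the sourcetype name is a placeholder and the regex should be adjusted to the real data:

  [my_proxy_accesslog]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)(?=\d{10}\.\d+\s)

LINE_BREAKER starts a new event wherever its first capture group matches, and the lookahead makes that happen only right before an epoch timestamp. If the two records really arrive on one physical line (no newline between them), relax the capture group to whitespace, e.g. LINE_BREAKER = (\s+)(?=\d{10}\.\d+\s+\d+\s+\d{1,3}\.), so the break can also occur mid-line. This only affects newly indexed data; already indexed events stay as they are.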
There are many ways to get the results, as @bowesmana and @emdaax show. One more alternative is json_extract_exact (JSON functions were introduced in 8.1):

| eval hits = json_extract(json_extract_exact(json_extract(payload, "cacheStats"), "lds:UiApi.getRecord"), "hits")
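A self-contained way to try that expression, using a made-up payload shaped like the one the search implies (the hit count 42 is just a placeholder):

  | makeresults
  | eval payload="{\"cacheStats\":{\"lds:UiApi.getRecord\":{\"hits\":42}}}"
  | eval hits = json_extract(json_extract_exact(json_extract(payload, "cacheStats"), "lds:UiApi.getRecord"), "hits")
  | table payload hits

json_extract_exact is used for the middle step because the key contains a colon and a dot, which json_extract would otherwise treat as path syntax.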
Actually, while your technique is correct, as you are ONLY interested in the count of duration>p99, you should use the fields statement to ONLY send the data you care about to the search head, i.e.

index=foo message="magic string"
| fields - _raw
| fields duration
| eventstats p99(duration) as p99val
| where duration > p99val
| stats count as "# of Events with Duration > p99"

Those two fields statements will mean that the only piece of data being sent to the SH is 'duration'.
I do not get the question. Except that you need to put that value into an eval, the search does give you 060624. Isn't this what you are looking for? What is the question?

| makeresults
| eval time=1717690912746
| eval readable_time = strftime(strptime(tostring(time/1000), "%s"), "%m%d%y")
| table time, readable_time

This is what I get:

time           readable_time
1717690912746  060624
Do you mean something like this?

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008") NOT (EventCode=4608 earliest=-5m)
| table _time name host dvc EventCode severity Message
While that solution does work, it's just as slow as the one I posted. Thank you, but I'm still hoping there's a better/faster way.
Hi @HPACHPANDE , please try something like this:

index=windows product=Windows EventCode="4609" OR EventCode="6008" OR (EventCode="4608" AND _time<now()-300)
| table _time name host dvc EventCode severity Message

I'm not sure that it's possible to add the last condition in the main search; please try, and if it doesn't run, please try this:

index=windows product=Windows EventCode="4609" OR EventCode="4608" OR EventCode="6008"
| where EventCode="4609" OR EventCode="6008" OR (EventCode="4608" AND _time<now()-300)
| table _time name host dvc EventCode severity Message

Ciao. Giuseppe
What about

| stats values(tags.ipv4-entry_prefix) as ipv4-entry_prefix values(tags.network-instance_name) as network-instance_name values(values.interface) as interface

or

| fields *.ipv4-entry_prefix *.network-instance_name *.interface
| stats values(*) as *

The latter will give

tags.ipv4-entry_prefix  tags.network-instance_name  values.interface
1.1.1.0/24              VRF_1001                    Ethernet48
Hi @learningmode , I suppose that you know the origin of your data in terms of hostnames, so you could upload to Splunk Cloud a custom add-on that overrides the index value based on the hostname. In addition, it's a best practice to have a Heavy Forwarder for each customer as a concentrator; in that case you could add your tenant field as metadata, and you wouldn't need to upload the add-on to Splunk Cloud because you could do the override job on the Heavy Forwarder itself. Ciao. Giuseppe
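A rough sketch of such an index override, placed on the Heavy Forwarder or in the uploaded add-on; the host pattern, stanza name and index name are placeholders, and the target index must already exist in Splunk Cloud:

props.conf

  [host::customer1-*]
  TRANSFORMS-route_index = set_customer1_index

transforms.conf

  [set_customer1_index]
  REGEX = .
  DEST_KEY = _MetaData:Index
  FORMAT = customer1

The props stanza matches events whose host starts with customer1-, and the transform rewrites the index metadata so those events land in the customer1 index regardless of what the input originally specified.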
The performance issue is using eventstats, which means all the data is pushed to the search head to do the calculations; it's not the final stats that is actually slow. Unfortunately I believe this is the general technique for comparing event values against any event aggregation. One slightly left-field alternative is to pre-calculate the p99 and write it to a lookup, e.g.

index=foo message="magic string"
| stats p99(duration) as p99
| outputlookup p99_tmp.csv

and then the second search just does this, which, if you are using the same time ranges and have a large amount of data, may be quicker:

index=foo message="magic string"
| eval p99=[ | inputlookup p99_tmp.csv | return $p99 ]
| where duration>p99
| stats count as "# of Events with Duration > p99"