All Posts

Hi gcusello, Thanks for your prompt reply. I tried your solution. It's almost perfect, but the interface field does not appear. I would appreciate it if you could give me some additional advice to resolve it.

index=gnmi ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "tags.ipv4-entry_prefix" AS ipv4_entry_prefix "tags.network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(tags.interface) AS interface BY tags_index
| sort ipv4_entry_prefix network_instance_name

Many thanks, Kenji
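A hedged guess at the missing piece, based on event#3 in the original question and the values(values.interface) suggestion in an earlier answer: the interface is extracted as values.interface (it sits under the values object in the JSON), not tags.interface, so the stats clause above aggregates a field that never exists. A minimal sketch with just that one name swapped (untested):

index=gnmi ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "tags.ipv4-entry_prefix" AS ipv4_entry_prefix "tags.network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(values.interface) AS interface BY tags_index
| sort ipv4_entry_prefix network_instance_name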
I highly recommend that you look at the training Splunk offers; it will get you into the deeper aspects of how to administer Splunk and build up your knowledge. The Splunk Admin courses should get you started, and the various modules should cover what you are looking for at a deeper level. https://www.splunk.com/en_us/training/course-catalog.html?sort=Newest&filters=filterGroup3SplunkEnterpriseCloudAdministrator
Hi Team, I have the stats group-by fields as a token that changes dynamically based on the time selection. For example, if I select since 1st Jun 2024, my query will be:

eventtype="abc" | stats count by a,b,c

and if I select a date before 1st Jun 2024, e.g. 30th May 2024, I would like the stats group-by fields to be:

eventtype="abc" | stats count by a,d,e

So my current implementation puts the group-by fields in a token; the token is set based on the time selection, and the final query is:

eventtype="abc" | stats count by $groupby_field$

The issue is that the Splunk dashboard says "Waiting for input" the moment I add the token input to the stats group-by field. I would appreciate your suggestions on how to handle this scenario. A sketch of one possible setup follows. Thanks, Mani
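One way to avoid the "Waiting for input" state, sketched as an assumption-laden example rather than a confirmed fix: give the token a default via <init>, then overwrite it from the time picker's <change> handler. The token names, the epoch cutoff 1717200000 (1 Jun 2024 00:00 UTC), and the field lists are hypothetical, and the numeric comparison only works if the picker yields epoch values rather than relative strings like -24h:

<form>
  <!-- default value up front, so the panel token is never unset -->
  <init>
    <set token="groupby_field">a,b,c</set>
  </init>
  <fieldset submitButton="false">
    <input type="time" token="time_tok" searchWhenChanged="true">
      <label>Time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
      <change>
        <!-- assumes the picker returns epoch values; relative strings
             such as "-24h" would need different handling -->
        <condition match="$time_tok.earliest$ &gt;= 1717200000">
          <set token="groupby_field">a,b,c</set>
        </condition>
        <condition>
          <set token="groupby_field">a,d,e</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>eventtype="abc" | stats count by $groupby_field$</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

The <init> default is what stops the dashboard from waiting: the token always has a value, even before the first change event fires.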
The problem is that this _raw does not correspond to linecount=2 but is recognized as 1. I will give you one _raw record as an example.

(recognized as one event)
1333561147.74 48957 131.178.233.243 TCP_DENIED/403 1914 GET http://bewfsnfwka.net/ edgy@demo.com NONE/- - BLOCK_AMW_REQ-DefaultGroup-Demo_Clients-NONE-NONE-NONE <nc,dns,-9,"Trojan-Downloader.Gen",100,13689,586638,-,-,-,-,-,-,-,-,nc,-> - - 1262356487.060 16922 131.178.233.243 TCP_REFRESH_HIT/200 474 GET http://damtare.by.ru/id.txt edgy@demo.com DIRECT/damtare.by.ru text/html DEFAULT_CASE-DefaultGroup-Demo_Clients-NONE-NONE-DefaultRouting <IW_scty,-6.9,0,-,-,-,-,0,-,-,-,-,-,-,-,IW_scty,-> - -

(1)
1333561147.74 48957 131.178.233.243 TCP_DENIED/403 1914 GET http://bewfsnfwka.net/ edgy@demo.com NONE/- - BLOCK_AMW_REQ-DefaultGroup-Demo_Clients-NONE-NONE-NONE <nc,dns,-9,"Trojan-Downloader.Gen",100,13689,586638,-,-,-,-,-,-,-,-,nc,-> - -

(2)
1262356487.060 16922 131.178.233.243 TCP_REFRESH_HIT/200 474 GET http://damtare.by.ru/id.txt edgy@demo.com DIRECT/damtare.by.ru text/html DEFAULT_CASE-DefaultGroup-Demo_Clients-NONE-NONE-DefaultRouting <IW_scty,-6.9,0,-,-,-,-,0,-,-,-,-,-,-,-,IW_scty,-> - -

How do I separate them into individual _raw events (1) and (2)? Please give an example of a regular expression that can separate them. Thank you.
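For what it's worth, a minimal props.conf sketch of the usual index-time fix, assuming each record starts with an epoch timestamp such as 1333561147.74 and that at least some whitespace separates consecutive records; the sourcetype name is made up:

# props.conf (sourcetype name is an assumption)
[proxy:accesslog]
SHOULD_LINEMERGE = false
# Break whenever whitespace is followed by what looks like the start of a
# new record: a 10-digit epoch plus fraction, an elapsed-time number, then
# an IPv4 address. The capture group is the delimiter text Splunk discards.
LINE_BREAKER = (\s+)(?=\d{10}\.\d+\s+\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s)

Note that LINE_BREAKER only applies at index time on the indexer or heavy forwarder, so already-indexed events stay merged.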
There are many ways to get the results, as @bowesmana and @emdaax show. One more alternative is json_extract_exact (JSON functions were introduced in 8.1):

| eval hits = json_extract(json_extract_exact(json_extract(payload, "cacheStats"), "lds:UiApi.getRecord"), "hits")
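To try it standalone, here is a minimal runnable sketch with a made-up payload whose structure is only an assumption based on the key names above:

| makeresults
| eval payload="{\"cacheStats\": {\"lds:UiApi.getRecord\": {\"hits\": 42}}}"
| eval hits = json_extract(json_extract_exact(json_extract(payload, "cacheStats"), "lds:UiApi.getRecord"), "hits")
| table payload hits

json_extract_exact is the key piece: it treats lds:UiApi.getRecord as one literal key, whereas json_extract would interpret the dot as a path separator.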
Actually, while your technique is correct, as you are ONLY interested in the count of duration>p99, you should use the fields statement to send ONLY the data you care about to the search head, i.e.

index=foo message="magic string"
| fields - _raw
| fields duration
| eventstats p99(duration) as p99val
| where duration > p99val
| stats count as "# of Events with Duration > p99"

Those two fields statements mean that the only piece of data being sent to the SH is 'duration'.
I do not get the question. Except that you need to put that value into eval, the search does give you 060624. Isn't this what you are looking for? What is the question?

| makeresults
| eval time=1717690912746
| eval readable_time = strftime(strptime(tostring(time/1000), "%s"), "%m%d%y")
| table time, readable_time

This is what I get:

time           readable_time
1717690912746  060624
Do you mean something like this?

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008") NOT (EventCode=4608 earliest=-5m)
| table _time name host dvc EventCode severity Message
While that solution does work, it's just as slow as the one I posted. Thank you, but I'm still hoping there's a better/faster way.
Hi @HPACHPANDE, please try something like this:

index=windows product=Windows EventCode="4609" OR EventCode="6008" OR (EventCode="4608" AND _time<now()-300)
| table _time name host dvc EventCode severity Message

I'm not sure that it's possible to add the last condition in the main search, so please try it; if it doesn't run, please try this instead:

index=windows product=Windows EventCode="4609" OR EventCode="4608" OR EventCode="6008"
| where EventCode!="4608" OR _time<now()-300
| table _time name host dvc EventCode severity Message

Ciao. Giuseppe
What about

| stats values(tags.ipv4-entry_prefix) as ipv4-entry_prefix values(tags.network-instance_name) as network-instance_name values(values.interface) as interface

or

| fields *.ipv4-entry_prefix *.network-instance_name *.interface
| stats values(*) as *

The latter will give:

tags.ipv4-entry_prefix  tags.network-instance_name  values.interface
1.1.1.0/24              VRF_1001                    Ethernet48
Hi @learningmode, I suppose that you know the origin of your data in terms of hostnames, so you could try to upload to Splunk Cloud a custom add-on that overrides the index value based on the hostname. In addition, it's best practice to have a Heavy Forwarder for each customer as a concentrator; in that case you could add your tenant field as metadata, and you wouldn't need to upload the add-on to Splunk Cloud because you could do the override job on the Heavy Forwarder. Ciao. Giuseppe
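A minimal sketch of what such an override add-on might contain; the host pattern, transform name, and index name are all hypothetical:

# props.conf
[host::customer-a-*]
TRANSFORMS-set_tenant_index = route_customer_a_index

# transforms.conf
[route_customer_a_index]
# unconditionally rewrite the destination index for events from matching hosts
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = customer_a_idx

Scoping the props stanza by host glob keeps the rewrite limited to one tenant; repeat the pair per customer.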
The performance issue is using eventstats, which means all the data is pushed to the search head to do the calculations. It's not the final stats that is actually slow. Unfortunately, I believe this is the general technique for comparing event values against any event aggregation. One slightly left-of-field alternative is to pre-calculate the p99 and write that to a lookup, e.g.

index=foo message="magic string"
| stats p99(duration) as p99
| outputlookup p99_tmp.csv

and then the second search just does this, which, if you are using the same time ranges and have a large amount of data, may be quicker:

index=foo message="magic string"
| eval p99=[ | inputlookup p99_tmp.csv | return $p99 ]
| where duration>p99
| stats count as "# of Events with Duration >= p99"
Hi @shimada-k, please try this:

index=your_index ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "ipv4-entry_prefix" AS ipv4_entry_prefix "network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(interface) AS interface BY tags_index

In other words, you have to coalesce the "tags.next-hop-group" and "tags.index" fields of your events and use the result as the key in a stats command. I had to rename your fields because the eval and stats commands don't always work correctly when a field name contains spaces, dots, or minus characters. Ciao. Giuseppe
Thank you for your answer. All these points have already been analyzed and taken into account, more or less. I was expecting more insights / inputs from experience and challenges that splunkers may have faced. However, I do know that our setup is a bit atypical and does not reflect most of the enterprise setups that others may work with.
Hi Experts, I would like to create the following table from the three events below.

ipv4-entry_prefix  network-instance_name  interface
----------------------------------------------------------------------
1.1.1.0/24         VRF_1001               Ethernet48

Both event#1 and event#2 have the "tags.next-hop-group" field, and both event#2 and event#3 have the "tags.index" field. All events are stored in the same index. I tried to write a proper SPL query to achieve the above, but I couldn't. Could you please tell me how to achieve this?

- event#1
{
  "name": "fib",
  "timestamp": 1717571778600,
  "tags": {
    "ipv4-entry_prefix": "1.1.1.0/24",
    "network-instance_name": "VRF_1001",
    "next-hop-group": "1297036705567609741",
    "source": "r0",
    "subscription-name": "fib"
  }
}

- event#2
{
  "name": "fib",
  "timestamp": 1717572745136,
  "tags": {
    "index": "140400192798928",
    "network-instance_name": "VRF_1001",
    "next-hop-group": "1297036705567609741",
    "source": "r0",
    "subscription-name": "fib"
  },
  "values": {
    "index": "140400192798928"
  }
}

- event#3
{
  "name": "fib",
  "timestamp": 1717572818890,
  "tags": {
    "index": "140400192798928",
    "network-instance_name": "VRF_1001",
    "source": "r0",
    "subscription-name": "fib"
  },
  "values": {
    "interface": "Ethernet48"
  }
}

Many thanks, Kenji
Hi, I think this will work for you:

index=foo message="magic string"
| eventstats p99(duration) as p99val
| stats count(eval(duration > p99val)) as count
I got an error like this:

There was an error processing the upload. Error during app install: failed to extract app from C:\Windows\TEMP\tmp5kgytoy5 to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\2d60c8764b856899: The system cannot find the path specified.
I have something that I think works, but I don't know how (in)efficient it is:

index=foo message="magic string"
| eventstats p99(duration) as p99val
| where duration > p99val
| stats count as "# of Events with Duration > p99"

It seems to take a long time to complete as soon as I add in the "| stats count" bit. Simply getting events seems pretty quick. Is this a good approach, and/or how can I improve it?
Thanks for the reply. Is it normal to see one sourcetype configured for 3 sources, where they are files on a Linux box? They've got the same timestamp pattern, but the logs are different and the line counts are different too.
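For reference, "one sourcetype, several sources" in config terms usually looks like the sketch below; the paths, sourcetype name, and timestamp format are hypothetical:

# inputs.conf -- three distinct files sharing one sourcetype
[monitor:///var/log/app/service_a.log]
sourcetype = my_app:logs

[monitor:///var/log/app/service_b.log]
sourcetype = my_app:logs

[monitor:///var/log/app/audit.log]
sourcetype = my_app:logs

# props.conf -- one set of timestamp rules covers all three
[my_app:logs]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false

This works when the files share timestamp conventions; if the field structure diverges a lot, separate sourcetypes are usually cleaner.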