All Posts

You can't do it directly, since when you do timechart by a field, it gets split. So you have to improvise.

EDIT: Missed the fact that it was avg(), not sum(). Of course summing averages is not the way to go, so @ITWhisperer's solution is the one to go for.

The obvious solution already provided is timechart | addtotals. You could also try to manually bin _time and use stats, but it boils down to the same thing. Several caveats:
1) Be careful with rounding.
2) Do fillnull if you can expect the by-field to be empty sometimes; otherwise your total will be wrong.
3) Use either limit=0 or useother=t - without one of them you'll lose data from the sum.
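For illustration, a minimal sketch of that timechart | addtotals pattern with the caveats applied. The field names value and host (and the 1h span) are placeholders, not from this thread, and as noted above, with avg() the added column would be a sum of per-host averages, which is usually not what you want:

| timechart span=1h limit=0 sum(value) AS value by host
| fillnull value=0
| addtotals fieldname="Total"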
Yes, and I don't think that's what I want. That seems to sum the split values; I want the non-split (effectively average) value. If there were a similar avgtotals, that would probably be what I'm looking for.
The goal is to calculate an overhead value over a span of 1 second. Overhead is calculated as the difference between totaltime and routingtime. Then, for each host as identified by hostname, create a line chart that shows the overhead for each host, and include another line on the chart that shows the average overhead across all hosts. Here are a few anonymized sample records:
{"severity":"Audit","hostname":"ahost02","received":"2025-01-14T19:12:44.623Z","protocol":"http","routingtime":189,"totaltime":234}
{"severity":"Audit","hostname":"ahost01","received":"2025-01-14T19:12:44.650Z","protocol":"https","routingtime":27,"totaltime":78}
{"severity":"Audit","hostname":"ahost01","received":"2025-01-14T19:12:44.634Z","protocol":"http","routingtime":36,"totaltime":74}
{"severity":"Audit","hostname":"ahost02","received":"2025-01-14T19:12:44.427Z","protocol":"http","routingtime":205,"totaltime":220}
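For reference, on the first sample record above the overhead would be 234 - 189 = 45. A minimal sketch of the per-host part, assuming the JSON fields are search-time extracted (spath is shown just in case) and that _time is already taken from the received field; the index and sourcetype names are placeholders:

index=your_index sourcetype=your_sourcetype severity=Audit
| spath
| eval overhead = totaltime - routingtime
| timechart span=1s eval(round(avg(overhead),1)) by hostname

The all-hosts average line is the part the rest of this thread deals with.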
I cannot see how this can be done without any configuration on the on-prem side. Usually clients approve some configuration changes if they really want this, once the options have been explained to them.
And if the client does not accept any type of configuration, is it possible to extract the information or events using Splunk's APIs?
@isoutamo Sorry, my bad. Not sure how I ended up finding that post. I will keep it in mind.
What does your expected output look like?
Not sure how I ended up responding to a solved question, sorry. @diogofgm thanks, I keep forgetting to use btool. So when I run the command you suggested, I see the [default] stanza earlier than my specific index stanzas like [ubuntu] and [rhel]. So I assume that whatever comes first under [default] (in my case, "frozenTimePeriodInSecs") would apply, and not what I have under [ubuntu] or [rhel], correct? Thanks for your help.
If needed, you could add suitable props.conf + transforms.conf on the indexers, or on an intermediate HF in front of the on-prem indexers, to do this. As I said, it is better to have separate HFs in front of the indexers and, if possible, use them only for the UFs that send data for this index. You could also use federated search to search those events from SCP even though they are stored on-prem. Based on your use case you can choose between those options.
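For illustration only, a sketch of what that props.conf/transforms.conf routing could look like on an intermediate HF; the sourcetype my:sourcetype, the output group onprem_indexers, and the server name are placeholders, not something from this thread:

props.conf
[my:sourcetype]
TRANSFORMS-route_onprem = route_to_onprem

transforms.conf
[route_to_onprem]
# send every event of this sourcetype to the on-prem output group
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = onprem_indexers

outputs.conf
[tcpout:onprem_indexers]
server = onprem-idx1.example.com:9997

Whether this lives on the indexers or on a dedicated HF depends on which of the options above you choose.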
Have you looked at addtotals? https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Addtotals
One small issue with this logic:
eval day_number=floor(day/7)+1
It results in the 7th, 14th, 21st, and 28th reporting in the following week. Week 1 should be days 1-7, week 2 should be days 8-14, etc. You need to modify it slightly to land those days in the proper week, because they are evenly divisible and end up with a +1 into the week after the one they are actually in.
eval day_number=floor((day-1)/7)+1
This is an old post, but since I'm using this logic and much appreciate the solution, I thought I'd point out the slight tweak needed for it to work 100% if anyone searches for this in the future.
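A quick way to sanity-check the two formulas, as a sketch (the field name day comes from the post above; everything else is just scaffolding):

| makeresults count=31
| streamstats count AS day
| eval old_week = floor(day/7)+1
| eval new_week = floor((day-1)/7)+1
| table day old_week new_week

For day=7 the old formula gives week 2 while the corrected one gives week 1; for day=8 both give week 2.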
Please provide some anonymised sample events, a description in non-SPL terms of how the events are to be processed and how they relate to an expected output.
You should create a new question instead of continuing a solved one. For indexes.conf and the others, you should look at https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Indexesconf and check what those sections mean. In indexes.conf there is a global section which sets some global values and also some defaults for all index stanzas. The per-index stanzas define attributes and values for an individual index. Some items can only be defined there, and some can also be defined at the global level; if an attribute is defined in both places, the index-specific one wins. There is also the app https://splunkbase.splunk.com/app/6368 which you can use inside the GUI without CLI access.
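As a small sketch of that precedence, using frozenTimePeriodInSecs from the question above (the index names and values are only examples):

indexes.conf
[default]
# applies to every index that does not override it (~90 days)
frozenTimePeriodInSecs = 7776000

[ubuntu]
homePath   = $SPLUNK_DB/ubuntu/db
coldPath   = $SPLUNK_DB/ubuntu/colddb
thawedPath = $SPLUNK_DB/ubuntu/thaweddb
# the index-specific value wins over [default] (~30 days)
frozenTimePeriodInSecs = 2592000

After a change, splunk btool indexes list ubuntu --debug shows which file each effective value comes from.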
This is different from what you originally asked for. Worse than that, the expected output is subtly different to your input events. Please can you explain precisely how the input events are to be processed to give the expected output?
The server team conducted patching and stopped nginx from running.
@diogofgm thanks, I keep forgetting to use btool. So when I run the command you suggested, I see the [default] stanza earlier than my specific index stanzas like [ubuntu] and [rhel]. So I assume that whatever comes first under [default] (in my case, "frozenTimePeriodInSecs") would apply, and not what I have under [ubuntu] or [rhel], correct? Thanks for your help.
I have a timechart that shows a calculated value split by hostname, e.g.:
[[search]]
| eval overhead=(totaltime - routingtime)
| timechart span=1s eval(round(avg(overhead),1)) by hostname
What I am trying to do is also show the calculated overhead value not split by hostname:
[[search]]
| eval overhead=(totaltime - routingtime)
| timechart span=1s eval(round(avg(overhead),1))
How do I show the split-out overhead values and the combined overhead value in the same timechart?
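One possible way to get both on the same chart, as a sketch: compute the per-host averages with stats, then add an extra series with appendpipe, weighting by event count so the extra line is a true average over all events rather than an average of per-host averages. The field names are the ones from this thread, the label "all hosts" is just an example, and [[search]] stands for your base search as above:

[[search]]
| eval overhead = totaltime - routingtime
| bin _time span=1s
| stats avg(overhead) AS overhead count AS n by _time hostname
| appendpipe
    [ stats sum(eval(overhead*n)) AS total sum(n) AS n by _time
    | eval overhead = total/n, hostname = "all hosts" ]
| eval overhead = round(overhead, 1)
| xyseries _time hostname overhead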
This is an example of the structure of my data and the query I am currently using. I have tried around 10 different solutions based on various examples from stackoverflow.com and community.splunk.com, but I have not figured out how to change this query so that eval Tag = "Tag1" can become a list of tags, e.g. eval Tags = ["Tag1", "Tag4"], and I get entries for all tags that exist in that list. Could someone guide me in the right direction?

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| eval Tag = "Tag1"
| spath
| foreach *ReportTags{} [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)
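One possible direction, as an untested sketch that keeps the structure of the query above: hold the wanted tags in a multivalue field, collect all ReportTags values into one multivalue field, and then use mvmap (Splunk 8.0+) to keep only the wanted tags that are present. The membership test here uses an mvappend/mvdedup/mvcount trick instead of a regex, and it is case-sensitive, unlike the lower() comparison above:

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| eval Tags = split("Tag1,Tag4", ",")
| spath
| foreach *ReportTags{} [| eval all_report_tags = mvappend(all_report_tags, '<<FIELD>>')]
| eval all_report_tags = mvdedup(all_report_tags)
| eval matched = mvmap(Tags, if(mvcount(mvdedup(mvappend(all_report_tags, Tags))) == mvcount(all_report_tags), Tags, null()))
| stats values(matched)

With the sample data this should return Tag1 and Tag4; a tag in Tags that appears in no ReportTags list would simply drop out.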
Hi team,
Is there a way to connect Splunk Cloud Platform with Splunk on-prem, in order to send a specific index to Splunk on-prem? The client does not allow modifications to the universal forwarder agents.
Regards
@danielbb Please have a look.