All Posts



Simple query, actually: index=ourindex earliest=epoch1 latest=epoch2. The command-line query returns 16 events; the same query in the UI returns 8 events. The 16 events include 8 duplicates. Running Splunk Enterprise 9.1.1 on a search head cluster.
Thanks @dural_yyz. I was thinking of a solution where I could enable the HTTP protocol for a specific token. Based on your comment and @jawahir007's comment, I infer that it is a global setting and cannot be changed for a specific token. I wonder why Splunk recommends using HTTP for performance optimisation (referring to the statement below from the listed reference link): "Sending data over HTTP results in a significant performance improvement compared to sending data over HTTPS." Troubleshoot HTTP Event Collector - Splunk Documentation
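For reference, the HEC SSL setting lives in the global [http] stanza of inputs.conf on the instance hosting HEC, which is why it applies to every token rather than per token. A minimal sketch:

```
# inputs.conf on the HEC receiver (this stanza applies to ALL tokens)
[http]
disabled = 0
enableSSL = 0    # 0 = accept plain HTTP; 1 = require HTTPS (the default)
```

Per-token stanzas ([http://<token_name>]) control things like token value, index, and sourcetype, but not the protocol.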
Based on your search, I believe that index=metrics is not a metrics index but an event index.  Is this correct?  The fundamental idea is to avoid timechart until after the change is detected.  Here is an alternative if you have to use an index search:

index=metrics host=*
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host
| bin _time span=1s
| stats rate(Query) as QPS by Site _time
| bin _time span=5m
| stats avg(QPS) as QPS by Site _time
| streamstats window=2 global=false current=true stdev(QPS) as devF by Site
| sort Site, - _time
| streamstats window=2 global=false current=true stdev(QPS) as devB by Site
| where 4*devF > QPS OR 4*devB > QPS
| timechart span=5m values(QPS) by Site

To get to this step, I had to run time bucketing twice to get a 5-minute average of QPS. (When you run per_second in timechart span=5m, I suspect it gives you an average of sorts.) I ran this against an emulation that you can also run:

index = _audit earliest=-0d@d latest=-0d@d+1h
| rename action as Site
| streamstats count as Query by Site
``` the above emulates
index=metrics host=*
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host ```

The caveat is that if a Site has no queries in some 5-minute intervals and then has a significant change later, you won't get connected points on the graph.
So this is a global setting and I cannot choose the protocol per token, is that right?
@martialartjesse Glad to know it helped someone else. Send over some karma if you don't mind. Thanks.
Perfect. Thank you @ITWhisperer, your solution worked.
| streamstats count as row
| eventstats max(row) as total
| where row = 1 OR row = total OR row = floor(total / 2)
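For example, appended to the search from the question (the by clause is an assumption, since the original stats must split by some field to return 170 rows):

```
index=x host=y "searchTerm"
| stats avg(Field1) avg(Field2) by some_field
| streamstats count as row
| eventstats max(row) as total
| where row = 1 OR row = total OR row = floor(total / 2)
```

streamstats numbers each result row, eventstats copies the total row count onto every row, and the where clause keeps only the first, middle, and last rows.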
Year and a half later but just wanted to let you know this solution saved my bacon, thanks!
Thanks @ITWhisperer. So in the screenshot below, we see that 170 statistics were returned. I would like the query to return the 1st, the 85th/86th, and the 170th statistic, instead of all 170 of them. Is there a way to accomplish this?
At first glance, it does not seem that the SC4S_IMAGE exists or is accessible. If you try to docker pull it, it says it either does not exist or needs credentials.  Could you check the journalctl logs for the service to see if there are errors or notes around that error which would add context?

sudo journalctl -u sc4s.service
Please explain what you mean by "duplicate" results. What is your search? Does this happen for just one search or all searches? Does it happen for all timeframes or just certain ones?
If you do not specify an index= filter in your search, then Splunk will search your role's default indexes, which can be toggled in the role settings. If you have no default indexes or no data in your default indexes, then no results will appear. The purpose of the cim_Endpoint_indexes macro is to list the indexes from which to find data to populate the data model, so you /should/ be able to list your index filters in there. E.g. index=ABC or index IN (ABC,DEF) The problem is that your Splunk instance is returning a 500 Internal Server Error when you try to edit the macro. In a working system it would not do that. Can you check the web_service.log to see what is causing the problem? If you can access the shell of your splunk search head and it is Linux, then the log should be findable at: /opt/splunk/var/log/splunk/web_service.log
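If the 500 error keeps you from using the UI, the macro can also be edited directly in a macros.conf file on the search head (the index names below are placeholders):

```
# e.g. $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/macros.conf
[cim_Endpoint_indexes]
definition = (index=ABC OR index=DEF)
```

A restart or a debug/refresh is typically needed for the change to take effect.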
Please explain, with some examples, what is in "statistics", what you mean by "first", "last" and "middle" and how this relates to the two averages that your stats command is returning.
I have this query:

index=x host=y "searchTerm" | stats avg(Field1) avg(Field2)

which returns a count of N statistics. I would like to modify my query so that the first stats value (statistics[0]), the middle stats value (statistics[N/2]), and the final stats value (statistics[N]) are returned in the same query. I have tried using head and tail, but that still limits the output to the number specified after head or tail. What other options are available?
What would cause a command line query ( bin/splunk search "..." ) to return duplicate results over what the UI would return?
I have had success in the past by base-64 encoding the image in an <img> tag and sending the html email. Can you try that? As for formatting the email, you could try writing the email in your favorite email client, then viewing the source code, then copy-and-pasting it into the SOAR email action html body field. It may still need some tweaking, but most of the formatting should be preserved.
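As a sketch of the encoding step (the function name and inputs are illustrative, not a SOAR API), this is roughly how the image ends up inside the HTML body:

```python
import base64

def inline_image_tag(image_bytes: bytes, mime: str = "image/png") -> str:
    """Build an <img> tag embedding the image as a base64 data URI."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f'<img src="data:{mime};base64,{encoded}" alt="chart"/>'

# In practice, read the bytes of the image you want to embed:
# with open("chart.png", "rb") as f:
#     html_body = "<html><body>" + inline_image_tag(f.read()) + "</body></html>"
```

The resulting string can be pasted into (or generated for) the SOAR email action's HTML body field; no attachment is needed because the image travels inline.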
In your dropdown, you seem to have the fieldForLabel and fieldForValue both set to "apps", but in your dynamic query you have used the table command to filter the fields down to only the "APPLICATION" field. Therefore no results will appear except for the default "All". I recommend changing fieldForLabel and fieldForValue to "APPLICATION".
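In Simple XML, the corrected dropdown would look roughly like this (the token name and search are placeholders, not taken from your dashboard):

```
<input type="dropdown" token="app_token">
  <label>Application</label>
  <fieldForLabel>APPLICATION</fieldForLabel>
  <fieldForValue>APPLICATION</fieldForValue>
  <search>
    <query>index=your_index | dedup APPLICATION | table APPLICATION</query>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
</input>
```

The key point is that fieldForLabel and fieldForValue must name a field that the dynamic search actually returns.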
Have a go at this:

index=* "daily.cvd"
| dedup host
| table host
| append [| inputlookup hosts.csv]
| stats count by host
| where count = 1
| lookup hosts.csv host outputnew host as host_found
| eval status = if(isnull(host_found),"NEW","MISSING")
| table host status

Make sure you have a lookup table (hosts.csv) with a single "host" column containing all your expected hosts.
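For example, hosts.csv would just be a one-column CSV (hostnames below are placeholders):

```
host
server01
server02
server03
```

Hosts seen in both the events and the lookup get count=2 and are filtered out; a host with count=1 is then either NEW (in the events but not the lookup) or MISSING (in the lookup but not the events).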
It might be possible to hotwire the app, but you would void the warranty. It would also be worth reading through the terms of use to ensure you are not breaking them. The app itself should, in theory, provide code that makes an API call to the Splunk AI servers. I could not tell you for sure, as I have only used the Preview version.
Either two columns as you described, or two columns with the machines that SHOULD appear and another column saying "Missing" if one is not there or "New" if it is new and unexpected. That way I wouldn't need to look through them as thoroughly and could see at a glance if something is wrong.