All Posts



So this is a global setting and I cannot choose the protocol per token, is that right?
@martialartjesse Glad to know it helped someone else. Send over some karma if you don't mind. Thanks.
Perfect. Thank you @ITWhisperer, your solution worked.
| streamstats count as row | eventstats max(row) as total | where row = 1 OR row = total OR row = floor(total / 2)
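A quick way to sanity-check this row-selection logic is with synthetic events via makeresults:

```
| makeresults count=170
| streamstats count as row
| eventstats max(row) as total
| where row = 1 OR row = total OR row = floor(total / 2)
```

This should return only rows 1, 85, and 170 — the first, middle, and last of the 170 results.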
Year and a half later but just wanted to let you know this solution saved my bacon, thanks!
Thanks @ITWhisperer. So in the screenshot below, we see that 170 statistics were returned. I would like the query to return the 1st, the 85th/86th, and the 170th statistic, instead of all 170 of them. Is there a way to accomplish this?
At first glance it does not seem that the SC4S_IMAGE exists or is accessible. If you try to docker pull it, it says it either does not exist or needs credentials. Could you check the journalctl logs for the service to see if there are errors or notes around that error which would add context? sudo journalctl -u sc4s.service
Please explain what you mean by "duplicate" results, and what your search is. Does this happen for just one search or all searches? Does it happen for all timeframes or just certain ones?
If you do not specify an index= filter in your search, then Splunk will search your role's default indexes, which can be set in the role settings. If you have no default indexes, or no data in your default indexes, then no results will appear.

The purpose of the cim_Endpoint_indexes macro is to list the indexes from which to find data to populate the data model, so you /should/ be able to list your index filters in there, e.g. index=ABC or index IN (ABC,DEF).

The problem is that your Splunk instance is returning a 500 Internal Server Error when you try to edit the macro; a working system would not do that. Can you check web_service.log to see what is causing the problem? If you can access the shell of your Splunk search head and it is Linux, then the log should be findable at: /opt/splunk/var/log/splunk/web_service.log
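As an illustration (the index names here are placeholders, not your real indexes), the macro definition would simply contain an index filter expression such as:

```
index IN (ABC, DEF)
```

Once saved, any data-model search that expands the macro will be constrained to those indexes.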
Please explain, with some examples, what is in "statistics", what you mean by "first", "last" and "middle" and how this relates to the two averages that your stats command is returning.
I have this query: index=x host=y "searchTerm" | stats Avg(Field1) Avg(Field2) which returns a count of N statistics. I would like to modify my query so that the first stats value (statistics[0]), the middle stats value ((statistics[0]+statistics[N])/length(statistics)), and the final stats value (statistics[N]) are returned in the same query. I have tried using head and tail, but that still limits the output to the value specified after 'head' or 'tail'. What other options are available?
What would cause a command line query ( bin/splunk search "..." ) to return duplicate results over what the UI would return?
I have had success in the past by base-64 encoding the image in an <img> tag and sending the html email. Can you try that? As for formatting the email, you could try writing the email in your favorite email client, then viewing the source code, then copy-and-pasting it into the SOAR email action html body field. It may still need some tweaking, but most of the formatting should be preserved.
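As a rough sketch of the inline-image approach (the base64 payload below is a truncated placeholder, not real image data):

```html
<html>
  <body>
    <p>Report below:</p>
    <!-- image embedded directly as a base64 data URI -->
    <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." alt="screenshot"/>
  </body>
</html>
```

Note that some email clients block data URIs, so it is worth testing against the clients your recipients actually use.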
In your dropdown, you seem to have the fieldForLabel and fieldForValue both set to "apps", but in your dynamic query you have used the table command to filter the fields down to only the "APPLICATION" field. Therefore no results will appear except for the default "All". I recommend changing fieldForLabel and fieldForValue to "APPLICATION".
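A minimal sketch of the Simple XML dropdown after that change (the token name and the search are placeholders to adapt to your dashboard):

```xml
<input type="dropdown" token="app_token">
  <label>Application</label>
  <fieldForLabel>APPLICATION</fieldForLabel>
  <fieldForValue>APPLICATION</fieldForValue>
  <search>
    <query>index=your_index | stats count by APPLICATION | table APPLICATION</query>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
</input>
```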
Have a go at this:

index=* "daily.cvd"
| dedup host
| table host
| append [| inputlookup hosts.csv]
| stats count by host
| where count = 1
| lookup hosts.csv host outputnew host as host_found
| eval status = if(isnull(host_found),"NEW","MISSING")
| table host status

Make sure you have a lookup table (hosts.csv) with a single "host" column containing all your expected hosts.
It might be possible to hotwire the app, but you would void the warranty. It would also be worth reading through the terms of use to ensure you are not breaking them. The app itself should, in theory, provide code that makes an API call to the Splunk AI servers. I could not tell you for sure, as I have only used the Preview version.
Either two columns as you described, or a column with the machines that SHOULD appear and another column saying "Missing" if a machine is not there or "New" if it is new and unexpected. That way I wouldn't need to look through them as thoroughly and could see at a glance if something is wrong.
If I understand correctly, you would like the final output to be two columns, where one shows the machines that SHOULD appear, and the second shows the machines that DO appear? Then you could see which machines are not appearing and therefore need attention? E.g.

SHOULD_APPEAR  DO_APPEAR
host1          host1
host2
host3          host3
...            ...
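One rough way to build that side-by-side view (assuming a hosts.csv lookup with a single "host" column) is:

```
index=* "daily.cvd"
| stats values(host) as DO_APPEAR
| appendcols [| inputlookup hosts.csv | stats values(host) as SHOULD_APPEAR]
| table SHOULD_APPEAR DO_APPEAR
```

Note that the two multivalue columns are not row-aligned, so a per-host status flag (NEW/MISSING) is usually easier to read at a glance.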
Hello everyone, I'd like to start out by saying I'm really quite new to Splunk, and we run older versions (6.6.3 and 7.2.3). I'm looking for a search that will do the following:

- Look up the current hosts in our system, which I can get with the following search:

index=* "daily.cvd" | dedup host | table host

- Then compare to a CSV file that has one column, with A1 being "host" and all other entries being the hosts that SHOULD be present/accounted for.
- Using ChatGPT I was able to get something like the below, which on its own will properly read the CSV file and output the hosts in it:

| append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

- However, when I combine the two, it shows me 118 results (it should only be 59), there are no results in the "current_hosts" column, and after 59 blank results the "known_hosts" column then shows the correct results from the CSV:

index=* "daily.cvd" | dedup host | table host
| append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

I'd love any help on this; I wouldn't be surprised if ChatGPT is making things more difficult than needed. Thanks in advance!
This is not a reliable way. If any other host mentions the host we're after, that event will get routed to syslog as well...