All Posts

Hi @tsocyberoperati, @PickleRick's hint is correct, but you can use this approach by finding a correct regex to identify the hosts. Ciao. Giuseppe
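For example, a minimal sketch (the index name and the host-naming pattern here are assumptions; adjust the regex to match how your hosts are actually named):

index=your_index
| rex field=host "^(?<site>[A-Za-z]+)\d*"
| stats count by site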
Thank you for the information. So, if the upgraded version is compatible with the Cribl workers, can we go ahead with the upgrade even though the final downstream cloud components run a lower version?
For example, I have a link to a specific trace: https://xxxx.signalfx.com/#/apm/traces/2459682daf1fe95db9bbff2042a1ec0e This, for example, will show me the whole trace waterfall from the beginning of the trace. Now, I want to be able to access this trace from a specific start_time and see it till end_time. Is it possible? If yes, what should be the correct link?
This should be fast enough:

| tstats max(_time) AS _time WHERE index=* BY host
| where relative_time(now(), "-30d") > _time
| reltime
| rename reltime as since_last_update
| eval last_update_time = strftime(_time, "%F %T")
ChatGPT is perhaps the last place you want to learn SPL from. The task is relatively straightforward.

index=* "daily.cvd"
| fields host ``` only needed if sources have too many fields ```
| eval source = "INDEX"
| append [inputlookup hosts.csv | eval source = "CSV"]
| stats values(source) as source by host
| eval status = case(mvcount(source) > 1, null(), source == "CSV", "Missing", true(), "New")
| fields - source
@gcusello I ended up taking an entirely different approach. I ditched inputlookup/lookup and used a bit of eval, where, and eventstats to achieve it. As for your suggestion to use a summary index, I do not have privileges to create a new index, so I couldn't try that, but I guess it would have worked. Thank you though, I can definitely keep this approach in mind whenever I run into problems again.
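For anyone landing here later, the pattern I mean looks roughly like this (a sketch only; the index names and the host field are placeholders, not my actual search):

(index=A) OR (index=B)
| eval host=lower(host)
| eventstats dc(index) as index_count by host
| where index_count = 1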
First, unless you have prior knowledge that the number of Hostnames in index A is always larger than that in index B in any search period, "missing" simply means that the name appears in only one index. The following does not try to address this problem, but it will give you what you want, and it is much simpler and perhaps more performant.

(index=A sourcetype="Any") OR (index=B sourcetype="foo")
| eval Hostname=coalesce(lower(Hostname), lower(Reporting_Host))
| fields index Hostname operating_system device_type
| stats values(*) as * by Hostname
| eval match=if(mvcount(index) == 1, "missing", "ok")

Not only operating_system and device_type; you can add any other fields of interest that may exist in only one of the indices.
How to fix"Could not load lookup=LOOKUP-autolookup_prices"
Simple query, actually: index=ourindex earliest=epoch1 latest=epoch2. The command-line query returns 16 events; the same query in the UI returns 8 events. The 16 events include 8 duplicates. Running Splunk Enterprise 9.1.1, search head cluster.
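One way to see where the duplicates come from (a hedged sketch; epoch1/epoch2 stand in for your actual epochs, and hashing _raw is just a convenient grouping key):

index=ourindex earliest=epoch1 latest=epoch2
| eval event_hash=md5(_raw)
| stats count values(splunk_server) as servers by event_hash
| where count > 1

If the duplicated events show different splunk_server values, the copies live on different indexers, which points at the forwarding layer rather than at the search itself.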
Thanks @dural_yyz. I was thinking of a solution where I could enable the HTTP protocol for a specific token. Based on your comment and @jawahir007's comment, I infer that it is a global setting and cannot be changed for a specific token. I wonder why Splunk recommends using HTTP for performance optimisation (referring to the statement below from the listed ref link): "Sending data over HTTP results in a significant performance improvement compared to sending data over HTTPS." Troubleshoot HTTP Event Collector - Splunk Documentation
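For reference, the reason it is global: SSL for HEC is set on the [http] stanza in inputs.conf, which covers all tokens; the per-token stanzas do not take this setting. A sketch of what that looks like (to the best of my knowledge; check the inputs.conf spec for your version):

# inputs.conf on the instance receiving HEC traffic
[http]
enableSSL = 0   # 0 = plain HTTP for ALL tokens; 1 (the default) = HTTPS

[http://my_token]   # token-level stanza; enableSSL is not valid here
token = <your-token-guid>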
Based on your search, I believe that index metrics is not a metrics index, but an event index. Is this correct? The fundamental idea to meet your goal is to not use timechart before we can detect change. Here is an alternative if you have to use index search.

index=metrics host=*
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host
| bin _time span=1s
| stats rate(Query) as QPS by Site _time
| bin _time span=5m
| stats avg(QPS) as QPS by Site _time
| streamstats window=2 global=false current=true stdev(QPS) as devF by Site
| sort Site, - _time
| streamstats window=2 global=false current=true stdev(QPS) as devB by Site
| where 4*devF > QPS OR 4*devB > QPS
| timechart span=5m values(QPS) by Site

To get to this step, I have to bin _time twice to get the 5-minute average of QPS. (When you run per_second in timechart span=5m, I suspect it gives you an average of sorts.) I ran this against an emulation that you can also run:

index = _audit earliest=-0d@d latest=-0d@d+1h
| rename action as Site
| streamstats count as Query by Site
``` the above emulates
index=metrics host=*
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host ```

The caveat is that if a Site has no queries in some 5-minute intervals and then has a significant change later, you won't get connected points on the graph.
So this is a global setting and I cannot choose the protocol per token, is that right?
@martialartjesse Glad to know it helped someone else. Send over some karma if you don't mind. Thanks.
Perfect. Thank you @ITWhisperer, your solution worked.
| streamstats count as row
| eventstats max(row) as total
| where row = 1 OR row = total OR row = floor(total / 2)
A year and a half later, but I just wanted to let you know this solution saved my bacon. Thanks!
Thanks @ITWhisperer. So in the screenshot below, we see that 170 statistics were returned. I would like the query to return the 1st, the 85th/86th, and the 170th statistic, instead of all 170 of them. Is there a way to accomplish this? [screenshot: sample]
At first glance it does not seem that the SC4S_IMAGE exists or is accessible. If you try to docker pull it, it says it either does not exist or needs credentials. Could you check the journalctl logs for the service to see if there are errors or notes around that error which would add context to it?

sudo journalctl -u sc4s.service
Please explain what you mean by "duplicate" results. What is your search? Does this happen for just one search or all searches? Does it happen for all timeframes or just certain ones?
If you do not specify an index= filter in your search, Splunk will search your role's default indexes, which can be toggled in the role settings. If you have no default indexes, or no data in your default indexes, then no results will appear.

The purpose of the cim_Endpoint_indexes macro is to list the indexes from which to find data to populate the data model, so you /should/ be able to list your index filters in there, e.g. index=ABC or index IN (ABC,DEF).

The problem is that your Splunk instance is returning a 500 Internal Server Error when you try to edit the macro. A working system would not do that. Can you check web_service.log to see what is causing the problem? If you can access the shell of your Splunk search head and it is Linux, the log should be at: /opt/splunk/var/log/splunk/web_service.log
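If searching is easier than shelling in, the same log is normally indexed into _internal (the sourcetype name below is the usual default; treat it as an assumption and adjust if yours differs):

index=_internal sourcetype=splunk_web_service (ERROR OR WARNING)
| sort - _time
| head 50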