All Posts

I am working on building a query to search retrospectively and potentially run a report. Let's say the first search is index=some_index "inconsistencies" | dedup someField and the second is index=some_index "consistent" someField IN (fieldValuesFromPrevMsg) | dedup someField. I want to check whether a field value seen in the first search also appears in the second search (which has a slightly different query but the same field) within a custom time frame (which could be in the future or the past relative to the first search). I'm new to Splunk; can someone please help me with this?
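One way to sketch this (keeping the placeholder names index=some_index, someField, and the literal search terms from the question; the earliest/latest values are just illustrative) is to feed the field values found by the first search into the second via a subsearch, setting the custom time frame on each search independently:

index=some_index "consistent" earliest=-7d latest=now
    [ search index=some_index "inconsistencies" earliest=-14d latest=-7d
      | dedup someField
      | fields someField ]
| dedup someField
| table someField

The subsearch returns someField=value pairs that implicitly filter the outer search, so only events whose someField appeared in the first search survive. Each search's earliest/latest can point at whatever window you need, past or future relative to the other.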
Hi @EricLBP - I'm a Community Moderator in the Splunk Community. This question was posted 11 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi, I am fetching data from the ServiceNow add-on for Splunk for one of the ServiceNow CMDB tables. While fetching, the field name is splitting as below. How do I fix this?
Hi @maayan, with this search you can list all the alerts:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| where alert_type!="always"
| table title

and with this search you can list the fired alerts:

index=_audit action="alert_fired"
| rename ss_name AS title
| join title
    [ | rest /services/saved/searches
      | table title, alert_threshold ]
| timechart values(alert_threshold) AS alert_threshold count by title

Ciao. Giuseppe
Which add-on are you talking about?
Hi, it's unclear from the app description what this app allows for. Does it help with RADIUS configuration for Splunk authentication? Or is it for monitoring any RADIUS server's logs, even if you don't use RADIUS within Splunk?
Use _time, then timechart will fill in the blanks for you:

| eval _time=strptime(TimeStamp, "%F %T")
| timechart span=2h count(Name) by machine
Hi, I confirm the link is dead and I haven't found how to unsubscribe. Is there anyone who can help us?
Thanks! I use TimeStamp and not _time. How do I use it in my query?

| addinfo
| fieldformat info_min_time=strftime(info_min_time,"%c")
| fieldformat info_max_time=strftime(info_max_time,"%c")
| where strptime(TimeStamp,"%F %T.%3N")>info_min_time AND strptime(TimeStamp,"%F %T.%3N")<info_max_time
```Divide the time into intervals```
| eval TimeStamp_epoch = strptime(TimeStamp, "%F %T")
| bin TimeStamp_epoch span=2d
| eval interval_start = strftime(TimeStamp_epoch, "%F %T")
| eval interval_end = strftime(relative_time(TimeStamp_epoch, "+2d"), "%F %T")
| eval interval_end = if(strptime(interval_end, "%F %T") > now(), strftime(now(), "%F %T"), interval_end)
| eval time_interval = interval_start . " to " . interval_end
| chart count(Name) over machine by time_interval
It appears that two-dimensional arrays are not easily handled (unless someone else knows differently), so you could try something like this:

| spath output=pointlist path=series{}.pointlist{}{}
| mvexpand pointlist
| table pointlist
| streamstats count as row
| streamstats count(eval(row % 2==1)) as row
| stats list(pointlist) as pointlist by row
| sort 0 row
| eval pointX = mvindex(pointlist,0)
| eval pointY = mvindex(pointlist,1)
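If the end goal is the line chart from the question, a possible continuation (assuming pointX is an epoch timestamp in milliseconds, as in the sample JSON, and pointY is the CPU % value; the span and field name cpu_pct are just illustrative) would be:

| eval _time = pointX / 1000
| timechart span=4h avg(pointY) as cpu_pct

Dividing by 1000 converts milliseconds to the epoch seconds that _time expects, after which timechart can plot the values on a time axis.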
see my answer here: https://community.splunk.com/t5/Splunk-Enterprise-Security/threat-intelligence/m-p/673449#M11868 
Hi, thanks! I will check. I don't have permission to install apps. I wonder if there is an internal query to get all alerts and their results.
Should I expect that the threat intelligence that is streaming in is being run against the events in my environment automatically? I would not expect that; most vendors don't integrate with the Splunk ES threat intel framework, they just make the TI data available in Splunk via a lookup file or by putting it in an index. If you want to be sure the TI info is flowing into the threat intel framework, I suggest you add the data there yourself: either by referring to the app-created lookup (if any), by creating your own lookup from the indexed data, or by adding a TAXII/STIX feed. See for more info: Splunk Lantern, Splunk Docs
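As a minimal sketch of the "create your own lookup from the indexed data" option (the index name, field names, and lookup file name here are hypothetical placeholders, not from the original post):

index=threat_intel_feed
| dedup indicator
| table indicator, threat_type, last_seen
| outputlookup my_threat_intel.csv

The resulting lookup file can then be registered with the ES threat intelligence framework, or used directly in your own correlation searches, following the Splunk documentation.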
OK, so have you tried what I suggested?
The query is used in a dashboard panel as a statistical table with a single row. The data is usually not available at regular intervals, hence we would like to show the last available data instead of "no results found" when there is no data for the default time range that we have set.
Essentially, data is returned from the selected time range; if there is no data, what time range do you want to use? You could do something like this:

<your search>
| appendpipe
    [| stats count as _count
     | where _count = 0
     | where isnull(_count)
     | append
         [| search <your index>
             [| metasearch index=<your index> earliest=0
              | head 1
              | rename _time as earliest
              | fields earliest] ] ]
Hi @maayan, have you explored the Alert Manager App (https://splunkbase.splunk.com/app/2665)? Try it, I usually use it when I cannot use ES. Pay attention to just one point: the app can see only alerts with Global sharing. Ciao. Giuseppe
Hello, I have seen a few of the spath topics around, but wasn't able to understand enough to make it work for my data. I would like to create a line chart using the pointlist values - they contain a timestamp in epoch milliseconds and a CPU %. The search I tried, which is not working as expected to extract this data:

index="splunk_test" source="test.json"
| spath output=pointlist path=series{}.pointlist{}{}
| mvexpand pointlist
| table pointlist

Please see the sample JSON below. {"status": "ok", "res_type": "time_series", "resp_version": 1, "query": "system.cpu.idle{*}", "from_date": 1698796800000, "to_date": 1701388799000, "series": [{"unit": [{"family": "percentage", "id": 17, "name": "percent", "short_name": "%", "plural": "percent", "scale_factor": 1.0}, null], "query_index": 0, "aggr": null, "metric": "system.cpu.idle", "tag_set": [], "expression": "system.cpu.idle{*}", "scope": "*", "interval": 14400, "length": 180, "start": 1698796800000, "end": 1701388799000, "pointlist": [[1698796800000.0, 67.48220718526889], [1698811200000.0, 67.15981521730248], [1698825600000.0, 67.07217666403122], [1698840000000.0, 64.72434584884627], [1698854400000.0, 64.0411289094932], [1698868800000.0, 64.17585938553243], [1698883200000.0, 64.044969119166], [1698897600000.0, 63.448143595246194], [1698912000000.0, 63.80226399404451], [1698926400000.0, 63.93216493520908], [1698940800000.0, 63.983679174088145], [1701331200000.0, 63.3783379315815], [1701345600000.0, 63.45321248782884], [1701360000000.0, 63.452383398041064], [1701374400000.0, 63.46314971048991]], "display_name": "system.cpu.idle", "attributes": {}}], "values": [], "times": [], "message": "", "group_by": []} Can you please help me achieve this? Thank you. Regards, Madhav
| timechart span=2h count(Name) by machine
Hi, I need to find a way to present all alerts in a dashboard (Classic/Studio). Users don't want to get an email for each alert; they prefer to see (maybe in a table) all the alerts on one page, plus each alert's last result. And maybe to click on an alert and get its last search. Is it possible to create an alerts dashboard? Thanks, Maayan