All Posts

Our system has a lot of Reports defined and I'm tasked with cleaning them up. The first thing I want to do is determine when each was last used. I found some searches that are supposed to help, but they are too old or something; the results are invalid (e.g. I get back Alerts and Searches when I want only Reports). Out of 199 Reports, 7 are scheduled, so I can guess when those ran last. Can someone show me a search that returns Reports, each with their last run date? Thanks!
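A rough sketch of one way to do this (not a definitive answer): list saved searches via REST, keep only the ones that look like plain Reports, and join on the last run time recorded in _audit. Treating alert_type="always" with alert.track=0 as the definition of a "Report" is an assumption, and the result only covers runs still inside your _audit retention, so adjust to your environment.
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert_type="always" alert.track=0
| fields title, eai:acl.app
| join type=left title
    [ search index=_audit action=search info=completed savedsearch_name=*
      | stats max(_time) AS last_run BY savedsearch_name
      | rename savedsearch_name AS title ]
| eval last_run=strftime(last_run, "%Y-%m-%d %H:%M:%S")
| table title, eai:acl.app, last_run
Reports that have never run (or that last ran before your audit retention window) will show an empty last_run.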
I dunno. I have somewhat of the same issue. A search result shows while it's searching and will stay if the search covers fewer than a certain number of days, but then disappears when the search completes over a number of days that is not consistent. So it seems related to the length of time of the search. My search has no dedup in it.
This seemed to do what you are asking about, if I understood your question correctly: | sendemail to="<email>" message="this is a test message: testing token $$token$$"
Should be able to check index=_audit for search run times for each individual search running on dashboards. Something like this. index="_audit" action=search provenance="UI:Dashboard:*" | table _time, user, app, provenance, savedsearch_name, search_id, total_run_time, event_count, result_count, search_et, search_lt | eval dashboard_id=mvindex(split(provenance, ":"), 2, -1)
Hi, is there a way we can check the dashboard load time? For example, if I choose today's timestamp and hit Submit, how long does it take the panels to return the data for that timestamp? Thanks, Selvam.
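If it helps, here is a minimal sketch (it measures how long the panel searches ran, not browser render time) built on the same _audit/provenance idea as above; the assumption that all panel searches show up under provenance="UI:Dashboard:*" may need adjusting for your environment:
index=_audit action=search info=completed provenance="UI:Dashboard:*"
| eval dashboard_id=mvindex(split(provenance, ":"), 2, -1)
| stats count AS panel_searches, sum(total_run_time) AS total_search_seconds, max(total_run_time) AS slowest_panel_seconds BY dashboard_id, user
Full page load time also includes browser-side rendering, which _audit does not capture.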
Hi @muradgh, I'm having the same issue with my FortiGate logs using TCP, but we're on Splunk Cloud, so modifying the props.conf file is not a straightforward task for us; I'm planning to use UDP instead. Are you able to share your syslog-ng.conf for FortiGate logging, if that's OK with you? I also need input on setting up the correct filters to make the raw output readable and one line per event. Did you also set the log format on the FortiGate firewall to RFC 5424 when sending to syslog-ng? Thank you in advance!
Thanks - that is a lot more detailed than my solution, and I like the intersection - it will be useful for helping people know what was in there. We often have hundreds of keys returned, and seeing which ones were returned is really useful. Thanks, Steven
And I have managed to solve it.. should have fetched a coffee before posting, I guess. I just needed to add a | search with IN after the split: index="PreProduction" source="Transactions" | eval KeysSplit=split(Keys,", ") | search KeysSplit IN($ObjectRefs$) I can then | table my results. Hopefully this may be useful to someone else.
Doing some SPL like this may lead you in the right direction, if I am understanding your question correctly. Note: the top portion of this code just generates sample data; the meat of the solution is where the comments start ``` <comment> ``` | makeresults | eval input_value="83, 9123, 272529, 1234" | append [ | makeresults | eval input_value="851056, 714062, 6234, 91258,272476" ] | append [ | makeresults | eval input_value="28, 10001, 18, 99923,1027385" ] ``` Generating a field with the comma delimited list of Keys ``` | eval Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183" ``` Splitting both Keys and the simulated user input field into multivalue fields ``` | eval mv_Keys=trim(split(Keys, ","), " "), mv_input_value=trim(split(input_value, ","), " ") ``` Looping through each entry in the multivalue field 'mv_input_value' and checking if it exists in the list of Keys ``` | eval intersecting_keys=case( isnull(mv_input_value), null(), mvcount(mv_input_value)==1, if('mv_input_value'=='mv_Keys', 'mv_input_value', null()), mvcount(mv_input_value)>1, mvmap(mv_input_value, if('mv_input_value'=='mv_Keys', 'mv_input_value', null())) ) Results are shown in the screenshot. You can split the comma delimited lists into multivalue fields and then loop through one of them to individually check whether each number exists in the other multivalue field. In this example I did that and created a new field 'intersecting_keys' to return the numbers that exist in both fields.
It's a good app, but not good enough. It is missing a few additional fields - for example, Parent_Process_Label (at least). Also, Parent_Process_PID always shows up as the "folder name".
I'm sending $phrase$ strings in an email notification, but they don't make it through because Splunk assumes they are variables. Is there a way to send these without Splunk recognizing them as variables? Thanks
Hi, we encountered the same issue after upgrading Splunk ES to 7.2.0. Could you please be more detailed about what you mean by: "I removed the stanza from the default folder" (which file in the default folder?) and "I added a stanza with disabled = 1 in local folder" (again, in which file did you add the stanza?). Also, are you referring to this recommendation (Ref: https://docs.splunk.com/Documentation/ES/7.2.0/RN/KnownIssues)? Add the following comment at the end of the file. Conf File Check for Bias Language [confcheck_es_bias_language_cleanup://default] debug = <boolean>
I have an index set up that holds a number of fields, one of which is a comma separated list of reference numbers, and I need to be able to search within this field via a dashboard. This is fine for a single reference, as we can just search within the field and prefix/suffix the dashboard parameter with wildcards, but for multiple values, which can be significant, I cannot see a way of searching. While I have looked at split and IN, neither seems to provide what I need, though that may be down to what I tried. Example data: Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183" I need to be able to enter any number of keys, in any order, and find any records that contain ANY of the keys - not all of them in a set order. So for the above it should return results if I search for (853957) or (855183, 714062) or (272476, 714062, 855183). Is anyone able to point me towards a logical solution for this? It will be a key aspect of our use of Splunk, enabling users to copy/paste a list of reference numbers and assess where these occur in our logs.
Not sure if this is exactly what you are looking for but I think it is pretty close. I got this output by stringing together a couple of streamstats with window=<int> and reset_before=<criteria> parameters | sort 0 +Machine, +time | streamstats count as row | eval TimeStamp=strftime(time, "%m/%d/%Y %H:%M:%S") | fields - _time | fields + row, Machine, TimeStamp, time | streamstats window=3 count as running_count, min(time) as min_time, max(time) as max_time by Machine | eval seconds_diff='time'-'min_time', duration_diff=tostring(seconds_diff, "duration") | streamstats window=3 reset_before="("seconds_diff>300")" count as running_count by Machine | eval Occurrence=if( 'seconds_diff'<=300 AND 'running_count'==3, "TRUE", "FALSE" ) | fields + row, Machine, TimeStamp, Occurrence   Here is the full SPL I used to generate the screenshot (results may vary because of the use of relative_time()) | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h@s") | append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+18s@s") ] | append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+34s@s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d@d+20h@h+31m@m+48s@s") ] | append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+52s@s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+5m+2s") ] | append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+302s@s") ] | append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "+2d-5h+18s@s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+18s@s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+2m+1s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+2m+34s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+4m-12s") ] | append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d@d+20h@h+43m@m+5s@s") ] | sort 0 +Machine, +time | streamstats count as row | eval TimeStamp=strftime(time, "%m/%d/%Y %H:%M:%S") | fields - _time | fields + row, Machine, TimeStamp, time | streamstats window=3 count as running_count, min(time) as min_time, max(time) as max_time by Machine | eval seconds_diff='time'-'min_time', duration_diff=tostring(seconds_diff, "duration") | streamstats window=3 reset_before="("seconds_diff>300")" count as running_count by Machine | eval Occurrence=if( 'seconds_diff'<=300 AND 'running_count'==3, "TRUE", "FALSE" ) | fields + row, Machine, TimeStamp, Occurrence
Hello all, I'm writing my first Modular Input app, and I'm wondering what's the best way to store a REST API key for my Python script? I've seen mention that the key can be stored within Splunk and retrieved by the script, but no solid explanation of how to do that. Can anyone provide a secure method? Thank you
It is not clear why row 5 should be true, since you haven't shared the data (the number of errors in each event). Having said that, are you trying to implement a sliding 5-minute window, or are you using time bins? If you are using time bins, row 5 is in a different bin from rows 1-4.
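If a true sliding window is what you want, here is a minimal sketch (the index, the ERROR filter, and the threshold of 3 are assumptions based on this thread) using streamstats with time_window instead of bins:
index=your_index "ERROR"
| streamstats time_window=5m count AS errors_last_5m BY Machine
| eval Occurrence=if(errors_last_5m>=3, "TRUE", "FALSE")
| table _time, Machine, errors_last_5m, Occurrence
Note that streamstats time_window expects events in descending time order (the default search order) and a valid _time field.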
Additionally, you can use one of several apps implementing such source tracking. For example - https://splunkbase.splunk.com/app/4621 On the other hand, you can use Forwarder Monitoring in the Monitoring Console to see "lost" forwarders (but this relies on _internal logs from the forwarder, not on the actual "production" events forwarded from a given UF).
Hi @subasm, I'm quite sure that the issue is in the data. Open a case with Splunk Support to be sure. Ciao. Giuseppe
Hi @maede_yavari, the best approach is having a lookup (called e.g. perimeter.csv) containing the list of all UFs to monitor (at least one column: host). Then you could run (e.g. every 15 minutes) a search like this: | tstats count WHERE index=_internal BY host | append [ | inputlookup perimeter.csv | eval count=0 | fields host count ] | stats sum(count) AS total BY host | where total=0  If you don't want to maintain this lookup, you could run this search every 15 minutes instead: | tstats count WHERE index=_internal earliest=-30d latest=now BY host _time | eval period=if(_time<now()-900,"Previous","Last") | stats dc(period) AS period_count values(period) AS period BY host | where period_count=1 AND period="Previous" This second solution has the advantage that you don't need to maintain the lookup, but it gives you less control, because you don't catch servers that haven't sent logs for more than 30 days, and it is heavier. Ciao. Giuseppe
Hi, I have installed the Splunk Universal Forwarder on several Windows servers, and they send their Windows logs to the indexers. All Windows logs are saved in the 'windows-index'. However, sometimes some of the Universal Forwarders are disconnected, and I have no logs from them for a period of time. How can I find which Universal Forwarders are disconnected? I must mention that the number of UFs is more than 400.