All Topics

Hi, I have the event below, from which I tried to extract the field serverA.

Event: ADMU0509I: The Application Server "serverA" cannot be reached. It appears to be stopped.

Query: source="teststatus" | rex max_match=100 field=_raw "Server\s"(?P<jvm>.*)"\s*cannot\sbe\s(?P<status>.*)" | table jvm,host

The output shows "serverA" instead of serverA. I don't want the double quotes; how do I achieve that?
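One possible fix (an untested sketch): match the double quotes explicitly, outside the capture group, and capture only the characters between them, so jvm never includes the quotes:

```spl
source="teststatus"
| rex max_match=100 field=_raw "Server\s+\"(?P<jvm>[^\"]+)\"\s+cannot\s+be\s+(?P<status>[^\.]+)"
| table jvm, host
```

The character class `[^\"]+` stops at the closing quote, so for the sample event jvm should come out as serverA rather than "serverA".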
Hi Splunkers, we have a transaction which runs every 4 hours and usually takes 5 minutes to complete. I'm trying to set up an alert that triggers when the transaction run time exceeds 5 minutes. We don't have the privilege to set up real-time alerts, so I tried comparing the transaction start time with the system time, but I'm not getting the desired results and am receiving false positives. I also need the alert to stop triggering once the transaction completes within the expected time (i.e. 5 minutes). Please help with this scenario. Thanks
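One possible approach (a sketch only; the index, sourcetype, and the "JOB STARTED"/"JOB COMPLETED" markers are hypothetical placeholders for your actual events), run as a scheduled alert every few minutes:

```spl
index=myindex sourcetype=mysourcetype ("JOB STARTED" OR "JOB COMPLETED")
| transaction startswith="JOB STARTED" endswith="JOB COMPLETED" keepevicted=true
| where duration > 300 OR (closed_txn=0 AND _time < relative_time(now(), "-5m"))
```

The idea: keepevicted=true keeps transactions that never saw their end event, and the where clause alerts either on a completed transaction that took longer than 300 seconds or on a still-open one (closed_txn=0) that started more than 5 minutes ago. Once the transaction completes within 5 minutes, neither condition matches, so the alert stops firing.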
Hi, I have a dashboard panel that displays (for a given server) four statistic values: backups started, running, successful, failed. The query section of my dashboard panel currently looks like this:

<query> index=myindex sourcetype=bla:linux:syslog host="server.bla.COM" ABCEVENT | stats count(eval(searchmatch("BACKUP AND CPF1124"))) as "Started" count(eval(searchmatch("BACKUP AND CPF1164 AND SUCCESS"))) as "Successful" count(eval(searchmatch("BACKUP AND CPF1164 AND FAILURE"))) as "Failed" | eval Running=if(Started-(Successful+Failed) &gt;= 0, Started-(Successful+Failed), 0) | table Started, Running, Successful, Failed </query>

The field Running is a calculated field which works, but not well, because it relies on data that may be unreliable. If the time range is 24 hours it's not so bad, but if I view the past 7 days there is an increased chance that an event relating to a backup starting, or completing successfully or with failure, was missed and not ingested into Splunk. This seems to happen sometimes. Since Running is not found through search but calculated, it depends on that accuracy; the calculation is used to imply whether a backup is likely running.

I want to replace the value displayed for Running with something based on new data I send to Splunk. The idea is to fetch only the last occurrence of this event from the past 5 minutes. The event returned will include a count value that I want to extract and use in my panel as a statistic:

index=myindex sourcetype=bla:linux:syslog host="server.bla.COM" ABCEVENT *NONSBS earliest=-5m | eventstats max(_time) as maxtime | where _time=maxtime

When I run the above as a regular Splunk search, I get a single event returned, which is perfect. I have already created a field extraction, Jobs_Running, which always shows up as an available field in my search results. What I would like to do is replace this:

| eval Running=if(Started-(Successful+Failed) &gt;= 0, Started-(Successful+Failed), 0)

with something similar to the above search string, adapted to work within the existing panel, so that I can display the new value for Running alongside the existing fields Started, Successful and Failed. Is there a way to do this? One thing I'm not sure about is whether I can pull in the already-extracted field (Jobs_Running) that is visible when I do a regular search, or whether I need to perform a field extraction on the fly. The expression is:

^(?:[^ \n]* ){9}(?P<Jobs_Running>\d+)

To summarise: Started, Successful and Failed are found from the search over the time range and counted; Running is calculated on the fly. Now I want to pull in a value from a single (latest) occurrence of an event searched from the last 5 minutes, and extract the field value. Thanks
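One possible adaptation (an untested sketch): replace the eval with an appendcols subsearch that pulls only the latest *NONSBS event from the last 5 minutes. If Jobs_Running is a saved field extraction it should already be available in the subsearch; the rex is included only as a fallback.

```spl
index=myindex sourcetype=bla:linux:syslog host="server.bla.COM" ABCEVENT
| stats count(eval(searchmatch("BACKUP AND CPF1124"))) as "Started"
        count(eval(searchmatch("BACKUP AND CPF1164 AND SUCCESS"))) as "Successful"
        count(eval(searchmatch("BACKUP AND CPF1164 AND FAILURE"))) as "Failed"
| appendcols
    [ search index=myindex sourcetype=bla:linux:syslog host="server.bla.COM" ABCEVENT *NONSBS earliest=-5m
      | head 1
      | rex "^(?:[^ \n]* ){9}(?P<Jobs_Running>\d+)"
      | fields Jobs_Running ]
| eval Running=coalesce(Jobs_Running, 0)
| table Started, Running, Successful, Failed
```

Since search results come back newest first, `| head 1` keeps the latest event, replacing the eventstats/where pair. The coalesce defaults Running to 0 when no *NONSBS event arrived in the window.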
Hi, I have the following scenario:

kvlookup1: has the list of resolved as well as unresolved tickets. This has many fields.
lookup2: has just the unresolved tickets (pulled from a scripted input; its tickets are part of the first lookup and it is a subset of it). This has just the ticket number and ticket state relevant to kvlookup1.

How can I join both of these to get a complete, accurate list of resolved and unresolved tickets? lookup1 will always have more records and is a superset of lookup2.
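One possible sketch, assuming the shared key field is called ticket_number and the state field ticket_state (both hypothetical; substitute your actual field names), and that lookup2 has a lookup definition: start from the superset and overlay the fresher state from lookup2.

```spl
| inputlookup kvlookup1
| lookup lookup2 ticket_number OUTPUT ticket_state as lookup2_state
| eval ticket_state=coalesce(lookup2_state, ticket_state)
| fields - lookup2_state
```

Tickets found in lookup2 take their (unresolved) state from it; tickets absent from lookup2 keep the state recorded in kvlookup1.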
I have a simple search: index=xyz logLevel IN (ERROR, INFO). How do I plot two different colors in a timespan chart? See the attached sample timespan chart. Ideally, I want to show red for ERROR and green for INFO on the same timespan chart.
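One possible sketch: split the timechart by logLevel so each level becomes its own series, then assign the series colors in the panel options.

```spl
index=xyz logLevel IN (ERROR, INFO)
| timechart count by logLevel
```

In Simple XML, the colors can then be pinned per series with something like `<option name="charting.fieldColors">{"ERROR": 0xFF0000, "INFO": 0x00B050}</option>` (the hex values are just examples).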
I am using the collect command to collect a single event to a summary index. When run as a search, it generates a single row. When run as part of a hidden search in a dashboard, I get multiple repeated rows in the summary index. If I show the hidden search in a table in the dashboard, I also get many rows in the summary, but only one in the table shown in the dashboard. The search is part of a hierarchy of base searches: the search for the table itself, which contains the collect command, is one search, and there are 5 base searches backing it up. If I press the rerun-search icon in the table, I get 8 rows in the summary, but I normally get 5 or 7. Does anyone know why this is?
Hi all, I am new to Splunk and I would like to seek help from the Splunk Community to generate the net power consumption, with the following conditions:

1. I have two sets of assets, A and B, each of which generates a power consumption value. To get the net power consumption (NPC), I need to subtract the power value of A from B (NPC = powerB - powerA).
2. The power consumption values are accumulated, so to obtain the power consumed by each asset, I subtract the earliest power value from the latest value (power = latest - earliest).

The problem I'm facing is that I can't use the same field (Power) to generate the power consumption values for both asset A and asset B. I attempted a multisearch because I want both searches to run at the same time, but the error I got was "subsearch contains a non-streaming command". Below is my search query:

| multisearch [ | stats latest(Power) as latest_A earliest(Power) as earliest_A by A] [| stats latest(Power) as latest_B earliest(Power) as earliest_B by B]
| eval powerA = latestA - earliestA
| eval powerB = latestB - earliestB
| eval NPC = powerB - powerA

What alternative ways or commands would make my query work? Please help!
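One possible alternative (an untested sketch; it assumes a single field, here hypothetically called asset, whose value distinguishes "A" from "B" — substitute your actual index and field names): multisearch only accepts streaming searches, and stats is not streaming, so run a single search and split inside stats instead.

```spl
index=power_index asset IN ("A", "B")
| stats earliest(Power) as first_reading latest(Power) as last_reading by asset
| eval power=last_reading-first_reading
| stats sum(eval(if(asset="B", power, -power))) as NPC
```

The first stats computes the accumulated delta per asset; the second sums B positively and A negatively, which yields NPC = powerB - powerA in one pass.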
Hi All, I am planning to upgrade the following Splunk instances from 6.x/7.x to 8.x, and for the same reason I need to upgrade the underlying OS from Windows Server 2012 to 2019.

Can you please share your thoughts/links/reference material on planning this exercise? How should I plan the OS upgrade? I currently see two main approaches:

1. Attempt an in-place upgrade of the current servers from 2012 to 2019. This would be without Azure tools and seems to be an unsupported approach (from Azure, anyway).
2. Build an entirely new Azure instance set, install Splunk v8 on it, replicating what we already have in place, and then replace the old v7 Splunk farm in some fashion. Initial thinking is to replace masters, then search heads, then indexers, then heavy forwarders.

Any thoughts or recommendations would be greatly appreciated, particularly if you have done this before.
I am executing the following query in Splunk:

| makeresults | eval ip="$ip$" | makemv delim="," ip | mvexpand ip | ipinfo ip [ search "10.19.10.10", "%ASA-6-722023", dest="*" | fields dest | rename dest as ip]

but it is giving me the following errors:

10 errors occurred while the search was executing. Therefore, search results might be incomplete.
Unrecognized option: ip=103.208.69.136
Unrecognized option: ip=103.226.206.167
Unrecognized option: ip=103.96.43.249
Unrecognized option: ip=106.193.34.105
Unrecognized option: ip=117.221.92.44
Unrecognized option: ip=182.70.78.160
Unrecognized option: ip=27.97.140.72
Unrecognized option: ip=49.36.37.0
Unrecognized option: ip=49.36.43.61
Unrecognized option: ip=68.228.83.221

I have installed the IPINFO app on Splunk to get the carrier information.
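One possible reading (a sketch only, untested): the subsearch expands to `ip="103.208.69.136" OR ip="..."` and those pairs are handed to ipinfo as literal options, which it does not recognize. A workaround is to have the subsearch build the comma-separated string for the eval instead of feeding ipinfo directly:

```spl
| makeresults
| eval ip=[ search "%ASA-6-722023" dest="*"
            | stats values(dest) as dest
            | eval search="\"" . mvjoin(dest, ",") . "\""
            | return $search ]
| makemv delim="," ip
| mvexpand ip
| ipinfo ip
```

Here `return $search` emits one quoted string ("ip1,ip2,...") that the outer eval assigns to ip, after which makemv/mvexpand fan it out and ipinfo receives a plain field, as in your working $ip$-token case.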
My log file is like below: "timestamp": "2021-03-22 22:37:35". Can someone tell me how to edit the props file correctly? My current props file doesn't have the time parameters listed:

DATETIME_CONFIG =
KV_MODE = json
# INDEXED EXTRACTIONS = json
NO_BINAY_CHECK = true

Appreciate the help on this. Should I give TIME_PREFIX = timestamp? What other parameters are required?
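A possible props.conf sketch (the stanza name is a placeholder for your sourcetype, and note the spelling of NO_BINARY_CHECK). TIME_PREFIX is a regex that must match everything up to where the timestamp itself begins, and TIME_FORMAT here matches the sample value "2021-03-22 22:37:35":

```ini
[your_sourcetype]
KV_MODE = json
NO_BINARY_CHECK = true
TIME_PREFIX = \"timestamp\":\s*\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

MAX_TIMESTAMP_LOOKAHEAD caps how far past TIME_PREFIX the parser reads, which helps avoid picking up stray digits later in the event.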
I looked under Lookups but did not find them. How do I view and use my Splunk KV store collections?
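One possible explanation (a sketch; the lookup name below is hypothetical): KV store collections do not appear under Lookups > Lookup table files. They are declared in collections.conf and exposed through a lookup definition in transforms.conf with `external_type = kvstore`; once such a definition exists, the collection can be read like any lookup:

```spl
| inputlookup my_collection_lookup
```

Lookup definitions of this type are listed under Settings > Lookups > Lookup definitions rather than with the CSV lookup files.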
I've recently begun exploring the FieldSelector command to better understand which fields are the best predictors for an ML model. During my research, I've gained what I think is a decent understanding of what constitutes a good predictor field, based largely on its p-value (anything below .05) and its score (the higher the better). I've been running some tests and noticed that the fields selected by FieldSelector don't represent what I would consider the most optimal selection of fields. The fit command I'm using is:

|fit FieldSelector num from PC_* value_hashed_* type=numeric mode=k_best param=10 into combined_field_selector

Once this is run, I compare the output to the summary of the combined_field_selector model, which provides score and p-values for all the fields:

| summary combined_field_selector

One of the ten fields selected via FieldSelector was PC_2, with a score of .3293 and a p-value of .5661. Of the 132 fields passed to this fit command, PC_2 ranked 115th in score and had the 15th-highest p-value. This seems to tell me it was not a good predictor for the model. Plus, I had more than ten fields with better score/p-value combinations. I know this type of question falls in no man's land between the underlying Python, the statistical algorithms, and Splunk, but Splunk is really my only means of applying ML to this data and troubleshooting the results. I'm hoping someone has a better understanding of what's going on and can potentially explain why these fields are being selected.
I have a difference between _time and timestamp of 5 to 50 seconds. How do I make _time match the timestamp value exactly? Is there a quick query that will do this, other than modifying the props.conf file? And what values in props.conf should be changed to fix this properly?
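A possible search-time workaround (a sketch; it assumes timestamp is an extracted field in the "%Y-%m-%d %H:%M:%S" format, and it only changes _time for the current search, not the indexed data):

```spl
index=myindex
| eval _time=strptime(timestamp, "%Y-%m-%d %H:%M:%S")
```

The permanent fix is at index time: point props.conf at the event's own timestamp with TIME_PREFIX and TIME_FORMAT (plus a MAX_TIMESTAMP_LOOKAHEAD), so the drift never occurs; DATETIME_CONFIG = CURRENT, if set, would be a likely cause of the discrepancy.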
I run | inputlookup geo_ocean.kmz, for example, but get an error. Please advise.
So I'm having trouble figuring this one out. Basically, for example, we have 1000 alarms per day and 100 readers in our office, which would be an average of 10 alarms per reader. My question is: how would I put that into a search that gets that info for me? I'm fairly new to Splunk, but here's what I have; however, it returns no results.

index="index" EVDESCR="Alarm" | stats avg(EVDESCR) by READERDESC
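One possible sketch: avg(EVDESCR) returns nothing because EVDESCR holds the string "Alarm", not a number. Count the alarm events per reader first, then average those counts:

```spl
index="index" EVDESCR="Alarm"
| stats count as alarms by READERDESC
| stats avg(alarms) as avg_alarms_per_reader
```

Dropping the second stats leaves the per-reader counts, which is often the more useful table; adding `| timechart span=1d count by READERDESC` would give the per-day view instead.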
How do I search multiple field values with the "where" command? I am trying to find multiple field values that are greater than zero and then list them in multiple rows within one column. I am not sure how to format this in SPL, or whether it is even possible with the "where" command. If you know of a more efficient way of solving this with another SPL command, please show the correct format of that command for best results. Thank you in advance for your help!
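One possible sketch (field names are hypothetical): where can test several fields at once, e.g. `| where fieldA > 0 AND fieldB > 0`, but that filters whole rows. To get the matching fields as rows of a single column, one option is to pivot the fields with transpose and filter afterwards:

```spl
index=myindex
| stats sum(fieldA) as fieldA sum(fieldB) as fieldB sum(fieldC) as fieldC
| transpose column_name=field_name
| rename "row 1" as field_value
| where field_value > 0
```

This assumes a single summary row going into transpose; the result is one column of field names and one of values, keeping only those greater than zero.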
Hi, I have two identical queries on the dashboard; the only difference is that one is based on previously defined search results. They produce very different charts, however. Here are the code and screenshots:

<form>
  <search id="events_search">
    <query> index = "*" | fields * </query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time_token">
      <label>Time</label>
      <default>
        <earliest>-48h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>Errors (Based on events_search query)</title>
        <search base="events_search">
          <query> search level IN ("error", "fatal") | timechart count </query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>Errors (Not based on any existing query)</title>
        <search>
          <query> index = "*" | fields * | search level IN ("error", "fatal") | timechart count </query>
          <earliest>-48h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>

So I wonder: is this a bug, or some sort of known behavior?
I'm not sure that I've picked the correct location; moderators, please move if needed. I found that I cannot normally run a search on index=_internal and get results from my search peers. Is there any setting to enable this? Or should I somehow "externalize" the desired data, say by copying it into a summary index?
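One possible place to look (a sketch; the role name below is hypothetical): by default a role's searchable-index list may exclude the internal indexes, in which case _internal events from peers are filtered out. In authorize.conf on the search head, the role's allowed indexes can be widened to include them:

```ini
# authorize.conf -- hypothetical role stanza
[role_myrole]
srchIndexesAllowed = *;_internal
```

It is also worth confirming that distributed search reaches the peers at all (e.g. a plain `index=_internal | stats count by splunk_server` should list each peer); if the role settings and peering both check out, a summary index is a workable fallback rather than a requirement.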
Suppose I have 3 cascading dropdowns: Name1, Name2 and Host. Each Name1/Name2 pair can have more than 2 hosts:

Name1   Name2   Host
abc     aaa     123, 234, 345
cde     bbb     456, 333, 444
efg     ccc     789, 666, 777

The Host dropdown has a static "All" choice with value *. So when I select abc, then aaa, then All, it displays all hosts instead of only those from the previous dropdowns (e.g. abc -> aaa -> All should show only hosts 123, 234 and 345). Because of that static choice, selecting All matches every host, and I get logs from all the other hosts as well. "All" should only cover the hosts belonging to the selected Name1 and Name2. Please help.
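One possible sketch (Simple XML; the index, field and token names are hypothetical): keep the static All choice, but make the panel's search always filter on the Name1/Name2 tokens too, so that Host="*" only matches hosts within the current selection:

```xml
<input type="dropdown" token="host_tok">
  <label>Host</label>
  <choice value="*">All</choice>
  <search>
    <query>index=myindex Name1="$name1_tok$" Name2="$name2_tok$" | stats count by Host</query>
  </search>
  <fieldForLabel>Host</fieldForLabel>
  <fieldForValue>Host</fieldForValue>
</input>
```

and in the panel itself: `index=myindex Name1="$name1_tok$" Name2="$name2_tok$" Host="$host_tok$" | ...`. The wildcard then expands only inside the rows already constrained by the first two dropdowns.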
I have a timechart that is split by a variable that is a combination of two fields, like field1_field2. I want to drill down and send only field2 to the next dashboard as a token. Is it possible to do some sort of regex on the drilldown value?
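One possible sketch (Simple XML; the dashboard path, token name, and single-underscore separator are assumptions): a <drilldown> can run an <eval> on the clicked series name before setting the token, for example stripping everything up to the first underscore with replace():

```xml
<drilldown>
  <eval token="field2_tok">replace($click.name2$, "^[^_]*_", "")</eval>
  <link target="_blank">/app/search/next_dashboard?form.field2=$field2_tok$</link>
</drilldown>
```

For a chart click, $click.name2$ carries the series name (here field1_field2), so the regex leaves only the part after the first underscore in $field2_tok$.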