All Posts


Hi @varshini_3141, are there other messages near the one you shared? Which operating system are you running on? Anyway, the fastest solution is to open a case with Splunk Support (sending them a diag that they can use to analyze your system). Ciao. Giuseppe
Hi @ITWhisperer, I will give some sample data like this. In my events I have the fields Identity, Test_ID, Test_Data and Test_Status. I want to find the maximum Test_ID for a given Test_Data and then show a table with all the above fields, but only for the maximum Test_ID. First I used eventstats to get the max Test_ID, then I compare it against Test_ID, and then I create the table. Is this the correct way, or do I need to do anything else?
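A minimal sketch of the eventstats approach described above (the index name my_index is hypothetical; the field names are taken from the post):

```
index=my_index
| eventstats max(Test_ID) as max_Test_ID by Test_Data
| where Test_ID == max_Test_ID
| table Identity Test_ID Test_Data Test_Status
```

Computing the max into a separate field and filtering with where avoids overwriting the original Test_ID values before the comparison.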
We have a Splunk forwarder installed on a server where the logs were pushed to Splunk Cloud. Without any restart or other interruption, the Splunk service has stopped. I found the below log in the UF: WARN DispatchReaper [DispatchReaper] - Received shutdown signal during startup reaping and did not complete all reaping tasks. Reaping will be performed upon next startup. There are no other logs related to the shutdown of the Splunk service. Any idea what could be the reason for the service shutdown?
@ITWhisperer and @bowesmana gave some good ideas for obtaining streams of events. I want to note that "if event has "PROD*" in field name I need to get the value" can have different meanings depending on what you want to do with the keys and values. If all you want is to list all values of each key, it can be as simple as: index=myIndex sourcetype=mySourceType | stats values(Prod*) as Prod*
Hello Experts, We are using AppDynamics On-prem version 23.1.3-66. Is there any best practice to exclude the App and Machine Agent installation directories from the antivirus scan? If yes, please also provide the AppDynamics documentation link. Thanks.
One additional optimisation that may be possible in your case... You are expecting 13m matching events out of 24m, so you want to totally ignore the other 11m events if possible, so they are never even scanned.

If you look at the job properties in the job inspector you will see scanCount and eventCount. One key way to improve performance is to reduce the scanCount, i.e. how often the indexers go and look at the raw event data to find whether your search matches. This can be done using the TERM(x) directive, where x is a piece of data that is a TERM, i.e. surrounded by major breakers in the data, so that it is recorded in Splunk's tsidx files. When you use the TERM(x) directive, Splunk will search the tsidx files for the given term and, if it is not found, will not even look at the bucket's raw data for that term.

If your search criteria have constraints that can be converted to TERM directives, try that. There's a really good talk about this topic from .conf20 here: https://conf.splunk.com/files/2020/slides/PLA1089C.pdf
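As an illustration of the idea, here is a hypothetical search (index, sourcetype, and IP are made up): an IP address is bounded by major breakers in most log formats, so it is stored as a term in the tsidx files and TERM() can skip buckets that never contain it:

```
index=web sourcetype=access_combined TERM(203.0.113.42) status=404
```

Compare scanCount in the job inspector with and without the TERM() wrapper to see whether it helps for your data.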
This is great, and long story short for your two qualifiers: yes to both (#1 and #2). I was indeed using a combined search as well.

Now for tstats, I really like your idea. The concern I had is, let's say I do have a sourcetype_1 with over 1,000,000 unique sourcetype_1_primary keys. This sourcetype is also incremental, so any net-new changes for any of the 1,000,000 primary keys are dumped into Splunk once every 24 hours, and not all of the 1,000,000 keys are updated every day. My rule of thumb is to look back a maximum of 30 days to catch all the changes and use stats latest() to build the latest data for each of the 1,000,000 primary keys. So your tstats example seems to only work for sourcetypes with full data dumps each day, where the specific span between latest and earliest is known, rather than for incremental sourcetypes. Otherwise, I could have set earliest=-24h and been done with it.

It's actually kind of ironic knowing how Splunk searches work with timeframes. Assuming you're searching with the 'earliest' time modifier and latest is now(), Splunk searches backwards from now() to the earliest; in other words, from latest to earliest. You can see the search working backwards in real time by observing the 'Timeline' under the ad-hoc search pane. Given that Splunk searches backwards, I just wish there were a way to tell Splunk, while it is doing the index searches, to keep only the latest event for each unique value of a field. For example: when doing index searches, tell Splunk to keep only the first occurring event for each unique value in the field sourcetype_1_primary, and to ignore any subsequent duplicate values as it continues to search backwards.

Edit: Aren't I just describing the streamstats command?

Edit 2: I converted my stats latest() to streamstats latest() and did not see improvements.
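For what it's worth, the behaviour described above (keep the first event seen per key while searching backwards, ignore later duplicates) is roughly what the dedup command does, since events arrive in latest-first order. A sketch, assuming the index and field names from the post:

```
index=my_index sourcetype=sourcetype_1 earliest=-30d
| dedup sourcetype_1_primary
```

Caveat: dedup still has to retrieve events from disk before discarding duplicates, so it reduces what flows down the pipeline but not necessarily the scanCount.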
Additionally, streamstats appears to break the ability to do a stats join when switching from stats values() to streamstats values(). It appears streamstats works correctly only for latest(), but not when joining data.
If the field is named _time then Splunk will format it automatically.

index=dog sourcetype=cat earliest=-30d [| inputlookup LU1_siem_set_list where f_id=*$pick_f_id$* | stats values(mc) as search | eval search="mc=".mvjoin(search," OR mc=")] | stats latest(_time) as _time by ip

Otherwise, you can use the convert command to format it.

index=dog sourcetype=cat earliest=-30d [| inputlookup LU1_siem_set_list where f_id=*$pick_f_id$* | stats values(mc) as search | eval search="mc=".mvjoin(search," OR mc=")] | stats latest(_time) by ip | convert ctime('latest(_time)')
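A third option (a sketch, using the same hypothetical index and sourcetype, with the subsearch omitted for brevity) is fieldformat, which changes only the displayed value while the underlying epoch value stays numeric and sortable:

```
index=dog sourcetype=cat earliest=-30d
| stats latest(_time) as _time by ip
| fieldformat _time=strftime(_time, "%Y-%m-%d %H:%M:%S")
```

This is handy in dashboards where you still want to sort the column chronologically.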
I have a search that looks like this:

index=dog sourcetype=cat earliest=-30d [| inputlookup LU1_siem_set_list where f_id=*$pick_f_id$* | stats values(mc) as search | eval search="mc=".mvjoin(search," OR mc=")] | stats latest(_time) by ip

What I see is:

mc        latest(_time)
00.00.01  1715477192
00.00.02  1715477192
00.00.03  1715477192

How do I present this in a dashboard with the time formatted? Thanks!
What @ITWhisperer said, but I suspect your problem is that you have client_ip_address, earliest and latest in your initial search term, which I am guessing corresponds to ip, earliest and latest in your lookup. If your data contains a field called ip and that is what you are calling client_ip_address, then also remove client_ip_address from your search. If your data contains a field called client_ip_address and that is supposed to match the ip in the lookup, then in your subsearch rename ip as client_ip_address.
Also, if you have lots of events that do NOT have any fields called PROD-anything, and your event data must have PROD as a term in the data, then you can filter down to only those events that contain PROD with TERM(PROD):

index=myIndex sourcetype=mySourceType TERM(PROD)
To be clear, here is what I'm getting:

src_ip     _time        values(src)  values(src_f_id)
01.00.00   2024-04-10   abcd1        OS-0030
02.00.00   2024-04-10   abcd2        OS-0030
03.00.00   2024-04-10   abcd3        OS-0030

So this is what I see on my end; what I'm trying to do is to present these in a nice dashboard. Thanks!
It is not clear what it is you are trying to visualise - by using values(*) you will get a series of multivalue fields - how are you trying to visualise these?
The search works, but I'm not able to produce a chart even though I have 7 statistics; the only Splunk visualization I can get to work is the histogram chart, which is weird. Any idea why? Could it be because I have the exact same _time and values, except that values(src) and src_ip are different? Thanks!
I am not sure what the problem is if it works! Having said that, I am not sure what the earliest and latest are doing on the index line. Try something like this:

index=myindex client_ip_address [| inputlookup ip_list_2.csv | eval ip = "*" . 'Extracted IP' . "*" | eval earliest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")-(60*60) | eval latest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")+(60*60) | fields ip earliest latest ]

The subsearch becomes a series of (ip=value AND earliest=value AND latest=value) clauses joined by ORs, which is what you appear to want. Or am I missing something?
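For illustration, with two hypothetical rows in ip_list_2.csv the subsearch would expand into something like the following (the IPs and epoch values are made up):

```
( ( ip="*10.0.0.1*" AND earliest=1715470000 AND latest=1715477200 )
  OR ( ip="*10.0.0.2*" AND earliest=1715480000 AND latest=1715487200 ) )
```

Each clause restricts both the IP match and the time window for that row, which is why the earliest/latest evals live inside the subsearch.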
Try something like this:

| foreach PROD* [| eval keep=if(isnull(keep) AND isnotnull('<<FIELD>>'), 1, keep)]
| where keep==1
The way to do it is to split the number up and do your lookup using those fields in a single lookup, passing all three fields:

| rex field=number "\+(?<cc>\d)(?<npa>\d\d\d)(?<nxx>\d+)"
| lookup lookupfile.csv cc npa nxx
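The rex pattern above can be sanity-checked outside Splunk with an equivalent Python regular expression. The phone number below is a made-up example; note Python uses (?P&lt;name&gt;...) for named groups where SPL's rex accepts (?&lt;name&gt;...):

```python
import re

# Equivalent of: rex field=number "\+(?<cc>\d)(?<npa>\d\d\d)(?<nxx>\d+)"
pattern = re.compile(r"\+(?P<cc>\d)(?P<npa>\d{3})(?P<nxx>\d+)")

m = pattern.match("+14155551234")  # hypothetical E.164-style number
print(m.group("cc"), m.group("npa"), m.group("nxx"))  # → 1 415 5551234
```

If the captures look right here, the same grouping will feed cc, npa and nxx into the lookup.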
If asset is unique in your lookup, you could do this (the format command will put in the "OR"s between rows):

Index=a sourcetype=b earliest=-30d [|inputlookup LU0_siem_asset_list where f_id=*OS-03* | rename asset as search | table search | format] | fields src src_ip src_f_id _time | stats latest(_time) as _time values(*) by src_ip | fieldformat _time=strftime(_time, "%Y-%m-%d %H:%M:%S")

This is just an example of a format you could use. For more details on the options, see the documentation: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Commontimeformatvariables
The CommandLine example you have shown does not match the lookup wildcard string you have shown, so it is not surprising that you don't get any results returned from the lookup. Also, if the commands lookup field already contains leading and trailing *, there should be no need to add them to the CommandLine filter in the subsearch.
It sounds like either client or apiName hasn't been extracted - can you check, e.g.

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error | rex field=message "Message=.* \((?<apiName>\w+?) -" | stats count by client

or

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error | rex field=message "Message=.* \((?<apiName>\w+?) -" | stats count by apiName