All Posts

This is a very vague question with very little information to go on...
1. Do you see events in other indexes but not in this one, or can you not find any events anywhere?
2. Were there any changes made to the environment lately?
3. Are you ingesting any data at all, or was it just a "static" environment? In that case the data might simply have rolled to frozen (i.e. been deleted) because it exceeded the retention period.
As @gcusello mentioned, KV store problems don't have much to do with whether you have events or not. They can cause other issues, but they are not responsible for data suddenly disappearing from indexes.
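If retention is the suspect, a quick sanity check of what is still on disk could look something like this (just a sketch; your_index is a placeholder):
| dbinspect index=your_index
| stats min(startEpoch) AS oldest_epoch max(endEpoch) AS newest_epoch count AS buckets
| eval oldest=strftime(oldest_epoch, "%Y-%m-%d %H:%M:%S"), newest=strftime(newest_epoch, "%Y-%m-%d %H:%M:%S")
| table oldest newest buckets
If the oldest bucket is much newer than you expect, the data has most likely aged out under the index's retention settings rather than "disappeared".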
Supposedly Splunk can support such time resolution, but I haven't found any official info on that. You can always test it by defining a time parsing rule with nanosecond precision and ingesting non-monotonically timed events differing at - for example - the nanosecond level. If you can later get your search sorted by _time (regardless of whether it's the default reverse chronological order or an explicit sort command), that would mean it works properly. Otherwise it would mean that either Splunk doesn't store time with such precision or at least doesn't use it for practical purposes.
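For reference, a minimal props.conf sketch for such a test might look like the following; the sourcetype name and timestamp layout are assumptions for illustration, not a tested configuration:
# props.conf on the parsing tier, for a test sourcetype with ISO timestamps carrying 9 subsecond digits
[nano_test]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N
MAX_TIMESTAMP_LOOKAHEAD = 40
I believe Splunk's strptime extensions accept %9N for nanoseconds; whether the full precision actually survives into _time and sorting is exactly what the test described above would show.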
Hi @MattiaP, Did you check whether your license is active? If no logs are being shown, it could be related to your license. Kind regards, Rafael Santos
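One quick way to see whether anything is being indexed at all (assuming the internal indexes are searchable in your environment) is the license usage log; this is only a sketch:
index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) AS bytes_indexed
If bytes_indexed stays flat at zero, the problem is on the ingestion side rather than in the searches themselves.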
As @PickleRick replied, you can avoid this just by using the EVAL or by applying filters that look for everything different from null or blank. You can also create a field extraction using a regex to avoid situations like this, for example:
| rex field=_raw "cs_username=\"(?<cs_username>.+?)\"\s"
https://regex101.com/r/f6booK/1
Hi @ITWhisperer, Below is my query which returns 250+ events:
sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>\"?[\w-]+\"?)"
| stats values(request_id) as request_ids
| eval request_ids = "\"" . mvjoin(request_ids, "\" OR \"") . "\""
| eval request_ids = replace(request_ids, "^request_id=", "")
| format
Thank you for your suggestions. As per your suggestions, we changed the SQL query. After the changes, the results still show the "Winows_Support - Operations" group. Can you please help me here?
Hi @gcusello, thank you for answering. The index stopped working when I had problems with MongoDB, so I thought it was correlated. What should I expect to find in inputs.conf? Sorry, I'm a beginner. Ciao, Mattia
Hi @jyates76, you have to extract the down duration and then run a simple search:
index=your_index "was down for"
| rex "was\s+down\s+for\s+(?<hours>\d+)hr:(?<minutes>\d+)min:(?<seconds>\d+)sec"
| eval duration=hours*3600+minutes*60+seconds
| timechart perc90(duration) BY host
You can test the regex at https://regex101.com/r/75pRcf/1 and then you can use other functions or aggregations. Ciao. Giuseppe
There are several ready-made apps aiming at that. You can also try to look at your index data "backwards" and compare with - let's say - the last day, if you don't want to maintain a predefined set of sources/sourcetypes/hosts/whatever to look for. But that approach is limited by your look-back window - if you search 3 days backwards, it won't alert you about sources that used to send data 4 days ago but stopped after that. So you can only do so much. There are no miracles, and Splunk doesn't know what "is supposed" to be sent to it unless you tell it.
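For example, a rough sketch of the "look backwards" idea (the index name and the 24-hour threshold are placeholders to adapt):
| tstats latest(_time) AS last_seen WHERE index=your_index BY host
| eval hours_silent = round((now() - last_seen) / 3600, 1)
| where hours_silent > 24
| sort - hours_silent
Anything this misses is exactly the case described above: a host that fell silent before the search window started.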
Still trying to figure this one out....  
It is deliberately set this way by the IIS TA:
EVAL-user = if(cs_username == "-", null(), cs_username)
So it's not that Splunk doesn't find it; it's just that the field is set to null when there is nothing there. You can look for cs_username!=* or NOT cs_username=* (these are not equivalent in general, but in this case either can be used).
Unable to fetch any data from an Ubuntu UF which should be reporting to Splunk Cloud.
1) Installed Splunk UF 9.2.0 and installed the credentials package from Splunk Cloud too.
2) Ports are open and traffic is allowed.
3) No errors in splunkd.log.
4) Currently no inputs are configured; checking data connectivity via the internal logs, i.e. index=_internal source=*metric.log*
splunkd.log shows only the warnings below:
02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.5:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
Please assist.
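A few standard checks on the forwarder side that might help narrow this down (paths assume a default /opt/splunkforwarder install):
# Does the forwarder consider the cloud indexers active?
/opt/splunkforwarder/bin/splunk list forward-server
# Show the effective outputs settings and which file each one comes from
/opt/splunkforwarder/bin/splunk btool outputs list --debug
# Watch live connection and queue messages from the output processor
tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i -E "tcpout|AutoLoadBalanced"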
1. This is quite an old thread. You'd be better off starting a new one (possibly with a link to this one for reference).
2. Generally speaking, as good practice:
- Don't mix package-manager management with manually dropping in files. It can end badly.
- If there is a package for your system, it's often (although not always; there are sometimes very badly built packages) the better solution.
I'm not sure about Splunk, but depending on how/where you installed your software before, the RPM might not fit exactly that layout. So while you can try to install the RPM package over a tarball-based /opt/splunk, I think I'd go for backup/remove/install/restore. Oh, and don't mess with your production server without testing in a dev environment first.
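A rough sketch of that backup/remove/install/restore sequence, assuming a default /opt/splunk path and an x86_64 RPM; treat this as an outline to adapt and test in dev, not a recipe:
# Stop Splunk and back up the whole tarball-based install
/opt/splunk/bin/splunk stop
tar -czf /tmp/splunk_opt_backup.tar.gz /opt/splunk
# Move the old install aside, then install the RPM cleanly
mv /opt/splunk /opt/splunk.old
rpm -ivh splunk-<version>.x86_64.rpm
# Copy configuration and index data (etc/ and var/lib/splunk) back from the backup as needed,
# then start Splunk and verify before removing /opt/splunk.old
/opt/splunk/bin/splunk start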
Anybody got opinions on the opposite situation? I've always upgraded Splunk using a tarball extracted over the top of the prior /opt/splunk installation: ~6 upgrades. Now I'd like to switch to RPM, for all the stated advantages. Any issues I need to worry about when installing the RPM over a prior tarball install?
How to do that is described here: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataSelfStorage But remember that this removes all data from that index; only hot buckets remain available!
Hi
Could you also tell us what you want to do with those values? Are you looking for events based on it only one month at a time, or what are you after? Basically, do something like this:
<form version="1.1" theme="light">
  <label>Create month base time picker</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="timePicker" searchWhenChanged="true">
      <label>Month Year</label>
      <fieldForLabel>Month</fieldForLabel>
      <fieldForValue>rTime</fieldForValue>
      <search>
        <query>| makeresults count=12
| streamstats count
| eval rTime = relative_time(now(), "-". count . "mon")
| eval Month = strftime(rTime,"%b-%y")
| table rTime Month</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
</form>
Then add this token "$timePicker$" to your other searches, like:
<panel>
  <title>Simple timechart</title>
  <chart>
    <title>SOMETHING</title>
    <search>
      <query>YOUR QUERY HERE</query>
      <earliest>$timePicker.earliest$</earliest>
      <latest>$timePicker.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
  </chart>
</panel>
But when you want to show events from, e.g., one month at a time, you must use other tokens on the dashboard. Set, for example, tokEarliest and tokLatest to the first and last second of the selected month, and then use those instead of $timePicker.earliest$ and $timePicker.latest$.
r. Ismo
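One possible (untested) way to derive those month-boundary tokens is a <change> handler on the same dropdown; tokEarliest and tokLatest are example token names, not something from the original post:
<input type="dropdown" token="timePicker" searchWhenChanged="true">
  <!-- label, fieldForLabel, fieldForValue and search elements as above -->
  <change>
    <!-- snap the selected epoch to the start of its month and to the start of the next month -->
    <eval token="tokEarliest">relative_time($timePicker$, "@mon")</eval>
    <eval token="tokLatest">relative_time($timePicker$, "+1mon@mon")</eval>
  </change>
</input>
The panels would then use $tokEarliest$ and $tokLatest$ in their <earliest>/<latest> elements.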
Thanks for your response. Let us suppose I want to export all the contents of a particular index to S3 buckets; can we do that?
Hi, basically yes, or at least all warm and cold buckets, but this means that you will freeze them and they will no longer be searchable. Just decrease your retention time for all indexes to as little as needed, and ensure that you have defined and configured your own S3 buckets for storing frozen data. Otherwise you will lose your events!!! BUT I'm not sure if this is what you are looking for. Can you describe your real issue, not your solution for it? r. Ismo
As has been said earlier, you can't get a 100% certain answer for this. You should look at those old answers to see what you could try.
Thanks @ITWhisperer, but this doesn't seem to work. I've simulated the average request time being over 1 second in the logs, and this search returns alert=1 straight away, whereas I'd want it to return that only when searching the 2nd time window, to say that we'd actually recovered from the high request time. Can you explain what is happening from streamstats onwards, as I can't get my head round it? I don't get how this separates the 2 time windows. I've been running the search manually looking back 30 mins and it just returns alert=1 every time. FYI, I initially got my time window wrong and it is actually checking every 15-minute window, so I'd want to compare the two 15-minute windows over the last 30 minutes to see if it has recovered. I don't think this makes a difference to the query though.