
All Posts

I think there is no easy way because, as @richgalloway mentioned, there is no warning when a search tries to use an index that doesn't exist. Even if you choose result_count=0, you will get a lot of false positives. The most reliable approach is a manual two-step method: pull the indexes referenced by your saved searches, then validate them against your existing indexes (see the combined sketch below).

Step 1: Get scheduled searches and their referenced indexes

    | rest /services/saved/searches
    | search disabled=0 is_scheduled=1
    | rex field=search "index\s*=\s*(?<referenced_index>[^\s\|]+)"
    | stats values(title) as searches by referenced_index

Step 2: Get your current indexes

    | rest /services/data/indexes
    | fields title

If this helps, please upvote.
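The two steps can be combined into a single search that flags only the scheduled searches whose referenced index no longer exists. A rough sketch, with the same caveat as above: the rex only captures the first literal index= clause, so searches using macros, wildcards, or multiple indexes still need manual review.

    | rest /services/saved/searches
    | search disabled=0 is_scheduled=1
    | rex field=search "index\s*=\s*(?<referenced_index>[^\s\|]+)"
    | search NOT [| rest /services/data/indexes | fields title | rename title as referenced_index]
    | table title referenced_index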
The message is generated when the scheduler sees it's time to run a search, but a previous instance of the same search is still running. Usually, that happens because the search is trying to do too much and can't complete before the next run interval. You have a few options:

- Make the search more efficient so it runs faster and completes sooner
- Increase the schedule interval
- Run the search at a less-busy time
- Re-schedule other resource-intensive searches so they don't compete with the one getting skipped.
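To see which searches are being skipped and why before choosing among those options, the scheduler logs can be queried directly. A minimal sketch, assuming the default _internal retention covers the window you care about:

    index=_internal sourcetype=scheduler status=skipped
    | stats count by savedsearch_name, reason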
Hi @mchoudhary

This typically happens when a scheduled search takes longer to complete than the interval between its scheduled runs, or when multiple demanding searches are scheduled to run at the same time, exhausting the available concurrency slots.

The cron schedule 2-59/5 * * * * means your search attempts to run every 5 minutes (at 2, 7, 12, ... minutes past the hour). The 33.3% skip rate (1 out of 3 runs skipped) strongly suggests that this search takes longer than 10 minutes but less than 15 minutes to complete, and that the concurrency limit for this type of search under your user/role context is effectively 2. That pattern would cause two instances of the search to start and run concurrently, and the third attempt would be skipped because both available slots are still occupied.

Before changing any limits for the user/role, I would investigate how long the search runs and, if possible, improve its performance. Can you check how long it is taking to run? If you are able to post the offending search here, we may be able to help improve it.

Try the following search to get the duration of your scheduled searches:

    index=_audit search_id="'scheduler*" info=completed
    | stats avg(total_run_time) as avg_total_run_time, p95(total_run_time) as p95_total_run_time by savedsearch_name

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
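Since you mention not having access to limits.conf, note that the effective settings can usually still be read (not changed) through REST. A sketch, assuming the standard [search] stanza field names:

    | rest /services/configs/conf-limits/search splunk_server=local
    | fields base_max_searches max_searches_per_cpu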
Hi Team, I have been observing a skipped search error on my CMC. The error is: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". The percentage skipped is 33.3%.

I saw many solutions online stating to increase the user/role search job limit or to make changes in limits.conf (which I don't have access to), but I couldn't figure it out or find clear explanations. Can someone help me solve this?

Also, the saved search in question is running on a cron of 2-59/5 * * * * for a time range of 5 mins.

Please suggest.
Thanks for your previous guidance. I've retried the process and made a full backup of both /opt/splunk/etc and /opt/splunk/var just in case. I then proceeded with a clean reinstallation of Splunk Enterprise version 9.4.3.

Everything seems to be working fine except for the KV Store, which is failing to start. Upon investigation, I found that the KV Store version used previously (4.0.x) is no longer compatible with Splunk 9.4.3, which likely makes my backup of the KV Store unusable under the new version.

Additionally, even after the KV Store upgrade attempt, my Universal Forwarders still do not appear in the Forwarder Management view, even though they are actively sending data and I can see established TCP connections on port 9997.
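Two places worth checking while debugging the KV Store startup (paths assume a default /opt/splunk install): the CLI status command, and the mongod log, which usually contains the actual startup error.

    /opt/splunk/bin/splunk show kvstore-status
    tail -50 /opt/splunk/var/log/splunk/mongod.log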
There is no warning when a search tries to use an index that doesn't exist.
Very helpful thanks!
Hi, I'm trying to clean up an old Splunk Cloud instance. One thought that occurred to me is to find scheduled searches that reference indexes that don't exist anymore; the indexes have long been cleaned up and deleted, but scheduled searches may still exist against them. I tried looking in the internal indexes but don't see any sort of warning message for "index does not exist". Does anyone know if such a warning message shows up that I can then use to deactivate these searches? Thanks in advance, Dave
@NullZero The golang versions are embedded in the Splunk binaries. To check your current versions:

    cd $SPLUNK_HOME/bin
    ./mongodump --version
    ./mongorestore --version

You should complete the upgrade to 9.4.2+ as soon as possible; after checking the current golang version, you can confirm whether it carries critical vulnerabilities that need patching.

If this helps, please upvote.
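If a Go toolchain happens to be available, another way to read the embedded version out of any Go-built binary is the go version command; failing that, the version string can be grepped out of the binary. Shown against mongodump purely as an illustration, not as a documented Splunk procedure:

    go version $SPLUNK_HOME/bin/mongodump
    # or, without a Go toolchain:
    strings $SPLUNK_HOME/bin/mongodump | grep -o 'go1\.[0-9.]*' | head -1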
Hi @tgulgund

For the benefit of others who find this answer, this is slightly incorrect. It *is* possible with a single setting for all your datasources. If you want to refresh all datasources on a Dashboard Studio dashboard, then update the defaults as per my previous message.

@livehybrid wrote:
Hi @tgulgund @PrewinThomas
You can set a default refresh time which will apply automatically to all data sources (unless a specific datasource overrides it). Edit the source of your dashboard and find the "defaults" section; under defaults->dataSources->ds.search->options create a new "refresh" key with a value containing your intended refresh interval, such as this:

    {
        "title": "testing",
        "description": "",
        "inputs": {},
        "defaults": {
            "dataSources": {
                "ds.search": {
                    "options": {
                        "queryParameters": {
                            "earliest": "-24h@h",
                            "latest": "now"
                        },
                        "refresh": "60s"
                    }
                }
            }
        },
        "visualizations": {
            ...
        }
    }

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I added "refresh": "5m", "refreshType": "delay" to my base search and it works for all the chain searches
IHAC running a large C11 on-prem stack. They are in a bit of a pickle: they're on unsupported RHEL 7, halfway through an upgrade from 9.3.x to 9.4.x, and seeking advice on the recent CVEs. My problem/question is what version of 'golang' is installed with their particular version of Splunk; this is in response to SVD-2025-0603 | Splunk Vulnerability Disclosure. It is not clear how to verify this.
Not quite. _indextime will always be the time the event is indexed; however, _time is usually derived from the data. For some reason Splunk is detecting the incorrect time. Try updating your props like this:

    [source::.../var/log/splunk/SA-ldapsearch.log]
    sourcetype = SA-ldapsearch
    DATETIME_CONFIG = CURRENT

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
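After restarting and letting fresh events in, a quick sanity check along these lines should show the lag between index time and event time collapsing to near zero (the index name is a placeholder):

    index=<your index> source=*SA-ldapsearch.log earliest=-1h
    | eval lag=_indextime-_time
    | stats min(lag) max(lag) avg(lag)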
Hi @maddop

Is the link you're clicking on a regular a/href link with a target=_blank? You may be able to "Execute Javascript" before the click to remove the target from the link, then click as normal:

    const links = document.querySelectorAll('a');
    // Iterate over each link and remove the 'target' attribute
    links.forEach(link => {
        link.removeAttribute('target');
    });

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@richgalloway My understanding is that, if indextime is today, _time should be the same, since there is no delay during ingestion. Here is the default props.conf that comes with the SA-ldapsearch TA (there is no transforms.conf):

    [source::.../var/log/splunk/SA-ldapsearch.log]
    sourcetype = SA-ldapsearch

    [SA-ldapsearch]
    EXTRACT-vars = Level=.+, (?<log_source>Pid=.+, File=.+, Line=.+), (?<message>.*)
Hi @tech_g706

If there are no fields in the event that you want to use as the _time field when it is ingested, then I would recommend forcing _time to be the ingestion time using the following props.conf update:

    [yourSourcetype]
    # Your other props here
    # Set _time to current time
    DATETIME_CONFIG = CURRENT

If you want one of the fields in the event to be _time, then please share the full raw event and details of the field which should be _time.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
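For that second case, the usual pattern is TIME_PREFIX plus TIME_FORMAT. A hypothetical example, assuming the event carried a JSON field "eventTime":"2025-06-08 05:00:20" (all names here are made up, and the regex and format string would need adjusting to the real data, including any timezone offset via %z):

    [yourSourcetype]
    # Regex locating the text immediately before the timestamp
    TIME_PREFIX = "eventTime":"
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 25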
Hi, I need to generate a server metric report which gives Server Availability and other hardware metrics.

Tried DEXTER, but it doesn't give the Machine Availability metric. Correct me if any changes are needed to get this metric in DEXTER.

Tried the API POST call, but it returns the report at 10-minute granularity. For example, if I fetch the last 1 hour, it splits the data into 6 chunks of 10 minutes each, which makes it very complex to get the desired metric.

Tried a dashboard, but I have to create multiple widgets or dashboards to achieve this, and even then the reports generated from it are not clear.

Kindly suggest a way to get the Machine Availability metric and other hardware metrics from AppDynamics as a report.
What do you expect/want to see for capturetime?  None of the timestamps in the event seem appropriate and all of them will either throw a warning or cause a quarantine bucket to be created. Please share the props.conf settings for that sourcetype.
Hi, I am experiencing an issue with the SA-ldapsearch TA.

I am using this search to validate the timestamps:

    index = <index name>
    | eval bucket=_bkt
    | eval diff = _indextime - _time
    | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
    | eval capturetime=strftime(_time,"%Y-%m-%d %H:%M:%S")
    | table indextime capturetime diff _raw

I can see that indextime = 2025-06-08 05:00:20 but capturetime = 2020-01-13 10:00:01. Splunk is ingesting the latest ldap events, but the _time field has timestamps from 2020.

In the raw event, there are multiple timestamps available:

    "whenCreated":"2018-06-05 10:43:19+00:00
    "whenChanged":"2024-02-11 13:52:37+00:00
    "pwdLastSet":"2019-07-24T06:41:44.698530Z
    "lastLogonTimestamp":"2019-07-24T06:41:44.282975Z

but I am not able to understand how the TA is extracting the 2020 timestamp from the raw event, as there is no such timestamp in it.
I am using the Synthetics browser test to track availability of our Citrix client application endpoints. The user journey:

1. Access the public URL
2. Sign into the account (username, click Next, password, click Sign in)
3. Click the application icon
4. A new window loads with the application

Everything works great up to step 3. I cannot figure out how we can track the new window. This is the key part: I need to know if it loads successfully. I suspect it is not possible based on reading the documentation, but has anyone had a similar issue and successfully solved it?