All Posts

Hi @cfernaca
The duplicate field extractions are most likely caused by multiple or conflicting search-time field extraction configurations applying to the integration sourcetype. Since INDEXED_EXTRACTIONS = none is set, the issue occurs at search time. KV_MODE = json is generally sufficient for JSON data, but other configurations (e.g., REPORT-* or EXTRACT-* settings in props.conf) might be redundantly extracting the same fields.
Check for conflicting configurations using btool. Run this command on your search head's CLI to see all applied settings for your sourcetype and the props.conf files they come from:
splunk btool props list integration --debug
Look for REPORT-* or EXTRACT-* configurations that extract fields already handled by KV_MODE = json.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
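To illustrate the kind of conflict btool can reveal (a hypothetical sketch, not taken from your environment; the EXTRACT name json_status is made up), compare a props.conf that double-extracts with the cleaned-up version:

# Conflicting: KV_MODE and a manual EXTRACT both pull the same JSON key at search time
[integration]
KV_MODE = json
EXTRACT-json_status = "status":"(?<status>[^"]+)"

# Cleaned up: let KV_MODE = json handle the structured fields on its own
[integration]
KV_MODE = json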
Good afternoon,
I have a monitoring architecture with three Splunk Enterprise nodes. One node acts as search head, one as indexer, and one handles all other roles. I have a HEC input on the indexer node to receive data from third parties. The sourcetype configured to store the data is as follows:
[integration]
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = test
disabled = false
pulldown_type = 1
INDEXED_EXTRACTIONS = none
KV_MODE = json
My problem is that when I search the data, some events have their fields extracted in duplicate while others have them extracted only once. Please, can you help me?
Best regards, thank you very much
@ND1 When an incident is not showing up, it is most probably either suppression blocking the events or the notable action not being properly configured.
1. Check if notable events are being created:
index=notable earliest=-7d | search source="*your_correlation_search_name*"
2. Check suppression settings:
| rest /services/saved/searches | search title="*your_correlation_search_name*" | table title, alert.suppress, alert.suppress.period
Then try the following:
- If alert.suppress=1, try disabling suppression temporarily in ES > Content Management
- Edit your correlation search and ensure the "Notable" action is checked and saved
- Test your correlation search manually first to confirm it returns results
If this helps, please upvote.
I have checked the file. Besides, one of the messages (Will begin reading at offset=13553847 for file='/var/log/messages') would indicate that Splunk has found the file and that it has contents. The host I have a problem with is actually one of a pair (A/B) of servers. I do see events in Splunk from the B server as expected. I also find events from the A server from /var/log/secure, just nothing from /var/log/messages.
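If it helps to narrow this down, splunkd's own logs on the A server usually say why a monitored file is not being read (a sketch, assuming the forwarder's internal logs are reaching your indexers; replace serverA with the real hostname):

index=_internal host=serverA source=*splunkd.log component IN (TailReader, TailingProcessor, WatchedFile) "/var/log/messages"
| table _time host component log_level _raw

Messages about the file being ignored, blacklisted, or matching a previously seen checksum (CRC) would explain why /var/log/secure is indexed while /var/log/messages is not.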
"Why is my Correlation Search not showing up in Incident Review?" "How do I determine why a Correlation Search isn't creating a notable event?"
Hello family, here is a concern I am experiencing: I have correlation searches that are enabled, and to verify that they are receiving the CIM-compliant data they require, I search for each of them by name on a Splunk Enterprise Security dashboard panel to make sure the dashboard populates properly, but nothing comes out. However, when I run the query of these correlation searches in the Search and Reporting app, I do see the events. I have already gone through the Splunk documentation on CIM compliance and watched some YouTube videos, but I still don't get it... Any extra sources from anyone that can help me understand this properly will be very welcome. Thanks and best regards.
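One way to see the gap between "searchable" and "CIM-compliant" is to query the data model the correlation search relies on with tstats (a sketch; Authentication is just an example, swap in the data model your correlation search actually uses):

| tstats count from datamodel=Authentication where earliest=-24h by index, sourcetype

If the raw search returns events but this returns nothing, the data is indexed but not mapped to the model (missing tags, eventtypes, or field aliases), which is exactly why Enterprise Security panels stay empty.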
@sainag_splunk's answer is correct, but be aware that not all searches specify an index name and those that do might do so using a wildcard.  This is a Hard Problem addressed in the .conf24 talk "Maximizing Splunk Core: Analyzing Splunk Searches Using Audittrail and Native Splunk Telemetry", which offers some ways to get the indexes used by a search.
I think there is no easy way because, as @richgalloway mentioned, there is no warning when a search tries to use an index that doesn't exist. Even if you filter on result_count=0, you will get a lot of false positives. The best way is to run a search against the saved searches and validate against your existing indexes. The most reliable approach is the manual two-step method:
Step 1: Get scheduled searches and their referenced indexes
| rest /services/saved/searches | search disabled=0 is_scheduled=1 | rex field=search "index\s*=\s*(?<referenced_index>[^\s\|]+)" | stats values(title) as searches by referenced_index
Step 2: Get your current indexes
| rest /services/data/indexes | fields title
If this helps, please upvote.
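If you want to collapse those two steps into one pass, a subsearch can flag scheduled searches whose referenced index is not in the current index list (a sketch built on the searches above; the rex only catches literal index=foo patterns and will miss macros, wildcards, and eventtypes):

| rest /services/saved/searches
| search disabled=0 is_scheduled=1
| rex field=search "index\s*=\s*\"?(?<referenced_index>[^\s\"\|]+)"
| where isnotnull(referenced_index)
| search NOT [| rest /services/data/indexes | fields title | rename title as referenced_index]
| table title referenced_index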
The message is generated when the scheduler sees it's time to run a search, but a previous instance of the same search is still running. Usually, that happens because the search is trying to do too much and can't complete before the next run interval. You have a few options:
- Make the search more efficient so it runs faster and completes sooner
- Increase the schedule interval
- Run the search at a less-busy time
- Re-schedule other resource-intensive searches so they don't compete with the one getting skipped
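To confirm which searches are being skipped and why, the scheduler's own logs are usually enough (a sketch, assuming the default scheduler sourcetype in _internal):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count

The reason field typically names the concurrency limit that was hit, which tells you whether to tune the search, the schedule, or the limits.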
Hi @mchoudhary
This typically happens when a scheduled search takes longer to complete than the interval between its scheduled runs, or when multiple demanding searches are scheduled at the same time, exhausting the available concurrency slots.
The cron schedule 2-59/5 * * * * means your search attempts to run every 5 minutes (at 2, 7, 12, ... minutes past the hour). The 33.3% skip rate (1 out of 3 runs skipped) strongly suggests that this search takes longer than 10 minutes but less than 15 minutes to complete, and that the concurrency limit for this type of search under your user/role context is effectively 2. That pattern lets two instances of the search run concurrently, while the third attempt is skipped because both available slots are still occupied.
Before changing any limits for the user/role, I would investigate how long the search runs for and, if possible, improve its performance. Can you check how long it is taking to run? If you are able to post the offending search here, we may be able to help improve its performance.
Try the following search to get the duration of your scheduled searches:
index=_audit search_id="'scheduler*" info=completed | stats avg(total_run_time) as avg_total_run_time, p95(total_run_time) as p95_total_run_time by savedsearch_name
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi Team,
I have been observing a skipped search error on my CMC. The error is: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". The percentage skipped is 33.3%.
I saw many solutions online suggesting I increase the user/role search job limit or make changes in limits.conf (which I don't have access to), but I couldn't figure it out or get a clear explanation. Can someone help me solve this?
Also, the saved search in question runs on a cron of 2-59/5 * * * * over a time range of 5 minutes. Please suggest.
Thanks for your previous guidance. I've retried the process and made a full backup of both /opt/splunk/etc and /opt/splunk/var just in case. I then proceeded with a clean reinstallation of Splunk Enterprise version 9.4.3. Everything seems to be working fine except for the KV Store, which is failing to start. Upon investigation, I found that the version used previously (4.0.x) is no longer compatible with Splunk 9.4.3, which likely makes my backup of the KV Store unusable under the new version. Additionally, even after the KV Store upgrade attempt, my Universal Forwarders still do not appear in the Forwarder Management view, even though they are actively sending data and I can see established TCP connections on port 9997.
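For anyone triaging a similar KV Store startup failure, two low-risk checks on the affected instance are the KV Store status command and mongod's own log (a sketch; paths assume a default /opt/splunk install):

$SPLUNK_HOME/bin/splunk show kvstore-status
tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log

mongod.log usually states plainly when the on-disk KV Store data is too old for the bundled server, which would confirm whether the 4.0.x data files are the blocker.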
There is no warning when a search tries to use an index that doesn't exist.
Very helpful thanks!
Hi, I'm trying to clean up an old Splunk Cloud instance. One thought that occurred to me is to find scheduled searches that reference indexes that no longer exist; the indexes have long since been cleaned up and deleted, but scheduled searches may still exist against them. I tried looking in the internal indexes but don't see any sort of warning message that an index does not exist. Does anyone know if such a warning message shows up that I could use to deactivate these searches? Thanks in advance, Dave
@NullZero The golang versions are embedded in the Splunk binaries. To check your current versions:
cd $SPLUNK_HOME/bin
./mongodump --version
./mongorestore --version
After checking the current golang version, you should complete the upgrade to 9.4.2+ promptly if that version carries critical vulnerabilities that need patching.
If this helps, please upvote.
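If running the binaries is not possible, the Go toolchain version is also embedded as a plain string inside them, so it can be read without executing anything (a sketch; the grep pattern may need adjusting for your platform):

strings $SPLUNK_HOME/bin/mongodump | grep -o 'go1\.[0-9]*\.[0-9]*' | sort -u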
Hi @tgulgund
For the benefit of others who find this answer: this is slightly incorrect. It *is* possible to use a single setting for all your datasources. If you want to refresh all datasources on a Dashboard Studio dashboard, update the defaults as per my previous message.
@livehybrid wrote:
Hi @tgulgund @PrewinThomas
You can set a default refresh time which will apply automatically to all data sources (unless a specific datasource overrides it). Edit the source of your dashboard and find the "defaults" section; under defaults->dataSources->ds.search->options, create a new "refresh" key whose value contains your intended refresh interval, such as this:
{
  "title": "testing",
  "description": "",
  "inputs": {},
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
          },
          "refresh": "60s"
        }
      }
    }
  },
  "visualizations": {
    ...
    ...
  }
}
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I added "refresh": "5m", "refreshType": "delay" to my base search and it works for all the chain searches
IHAC running a large C11 on-prem stack. They are in a bit of a pickle: they are on unsupported RHEL 7, halfway through an upgrade from 9.3.x to 9.4.x, and seeking advice on the recent CVEs. My problem/question is what version of 'golang' is installed with their particular version of Splunk; this is in response to SVD-2025-0603 | Splunk Vulnerability Disclosure. It is not clear how to verify this.
Not quite: _indextime will always be the time the event is indexed, whereas _time is usually derived from the data. For some reason Splunk is detecting the incorrect time. Try updating your props like this:
[source::.../var/log/splunk/SA-ldapsearch.log]
sourcetype = SA-ldapsearch
DATETIME_CONFIG = CURRENT
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
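To confirm whether the timestamps really are being mis-parsed, comparing _time with _indextime for that source before and after the change is a quick check (a sketch; adjust the index if SA-ldapsearch.log is routed somewhere other than _internal):

index=_internal source=*SA-ldapsearch.log
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds) as min_lag max(lag_seconds) as max_lag avg(lag_seconds) as avg_lag

Large or negative lag values indicate the extracted _time is wrong; once DATETIME_CONFIG = CURRENT takes effect, the lag should sit close to zero.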