All Posts

@Lien unfortunately, it's not supported for the Splunk Cloud trial version. https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/TypesofSplunkClouddeployment If this helps, please upvote!
Hi @livehybrid, Thank you for your reply. I only created one group. I am using the Splunk Cloud trial version. Is there any limitation on setting up SSO? Another problem is that once it shows that error page, I can no longer log on with a local user. It redirects to Okta when I access the instance, so I have lost the ability to log on to Splunk Cloud.
Thank you very much for the detailed comments. I edited my post with some details. I did not suspect anything with regard to the monitor stanza because another host with essentially the same configuration works as expected. Where it doesn't work, I do find events from /var/log/secure (from the same monitor stanza). I will run btool debugging and report back. Thanks again!
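For the btool check, a minimal sketch assuming a default $SPLUNK_HOME install and that the input stanza is monitor:///var/log/messages (adjust the stanza name to match your TA):

$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/messages --debug
$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i /var/log/messages

The --debug flag shows which configuration file each setting comes from, which makes conflicting or overriding stanzas easy to spot.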
Thank you for your reply. I edited my post with some more details. It's a custom TA with a simple file monitor stanza. I don't think the inputs configuration is an issue.
@ND1 It's not easy to troubleshoot without a screen share, but typically I recommend:
1. Check the time filter on each dashboard panel
2. Click the magnifying glass on the panel to view the search
3. Expand the search to see what's actually running - you'll typically see macros there
4. Expand those macros using Ctrl + Shift + E (Windows) or Cmd + Shift + E (Mac)
5. Run the expanded search with a broader time range to see if data appears

Also check:
- Time range mismatch: the ES dashboard is looking for recent data while your correlation search finds older events
- Data model acceleration: your correlation search might need CIM-compliant field mappings (see the tstats sketch after this list)
- Dashboard filters: check if the dashboard has hidden drilldown tokens or filters applied

Check out this user guide: https://help.splunk.com/en/splunk-enterprise-security-8/user-guide/8.0/analytics/available-dashboards-in-splunk-enterprise-security

Additional help: if you have Splunk OnDemand Services credits available, I'd recommend using them to walk through this issue with a Splunk expert who can troubleshoot in real time.

If this helps, please upvote.
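As a quick way to confirm that CIM-compliant data is actually reaching an accelerated data model, a tstats search along these lines can help; the Authentication data model here is only an example, so substitute the data model and fields your correlation search uses:

| tstats summariesonly=true count from datamodel=Authentication by Authentication.src Authentication.user _time span=1h
| sort - _time

If this returns nothing while the raw search in Search & Reporting returns events, the data is most likely not mapped to the data model (missing tags or field aliases) or the acceleration summary has not yet been built.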
Hi @cfernaca

The duplicate field extractions are likely due to multiple or conflicting search-time field extraction configurations applying to the integration sourcetype. Since INDEXED_EXTRACTIONS = none is set, the issue occurs at search time. KV_MODE = json is generally sufficient for JSON data, but other configurations (e.g., REPORT-* or EXTRACT-* in props.conf) might be redundantly extracting the same fields.

Check for conflicting configurations using btool. Run this command on your Search Head's CLI to see all applied settings for your sourcetype and the source props.conf files:

splunk btool props list integration --debug

Look for REPORT-* or EXTRACT-* configurations that might be extracting fields already handled by KV_MODE = json.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
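For illustration only, a props.conf on the search head that would produce this symptom might look like the following; the EXTRACT name and regex are hypothetical, not taken from the original post:

[integration]
KV_MODE = json
EXTRACT-status_from_raw = "status"\s*:\s*"(?<status>[^"]+)"

Here status is extracted once by KV_MODE = json and a second time by the EXTRACT rule, so the field shows up with duplicate values. Removing the redundant EXTRACT (or, if a custom extraction is preferred, setting KV_MODE = none) resolves the duplication.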
Good afternoon, I have a monitoring architecture with three nodes running Splunk Enterprise. One node acts as the search head, one as the indexer, and one handles all other roles. I have a HEC input on the indexer node to receive data from third parties. The sourcetype configured to store the data is as follows:

[integration]
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = test
disabled = false
pulldown_type = 1
INDEXED_EXTRACTIONS = none
KV_MODE = json

My problem is that when I search the data, some events have their fields extracted in duplicate while others have them extracted only once. Please, can you help me? Best regards, thank you very much.
@ND1 Most probably when an incident is not showing up, it's either suppression blocking the events or the notable action not being properly configured.

1. Check if notable events are being created:

index=notable earliest=-7d | search source="*your_correlation_search_name*"

2. Check suppression settings:

| rest /services/saved/searches | search title="*your_correlation_search_name*" | table title, alert.suppress, alert.suppress.period

Then try the following:
- If alert.suppress=1, try disabling suppression temporarily in ES > Content Management
- Edit your correlation search and ensure the "Notable" action is checked and saved
- Test your correlation search manually first to confirm it returns results

If this helps, please upvote.
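One more check that can help, offered as a sketch (substitute your correlation search name for the placeholder): the scheduler's own logs show whether the search ran, how many results it returned, and which alert actions it fired.

index=_internal sourcetype=scheduler savedsearch_name="your_correlation_search_name"
| stats count by status, result_count, alert_actions

If result_count is 0, or alert_actions does not include notable, the problem sits upstream of Incident Review rather than in suppression.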
I have checked the file. Besides, one of the messages (Will begin reading at offset=13553847 for file='/var/log/messages') would indicate that Splunk has found the file and that it has contents. The host I have a problem with is actually one of a pair (A/B) of servers. I do see events in Splunk from the B server as expected. I also find events from the A server from /var/log/secure, just nothing from /var/log/messages.
"Why is my Correlation Search not showing up in Incident Review?" "How do I determine why a Correlation Search isn't creating a notable event?"
Hello family, here is a concern I am experiencing: I have correlation searches that are activated/enabled. To verify that they are receiving the CIM-compliant data required to make them work, I search their names one by one on a Splunk Enterprise Security dashboard panel to make sure the dashboard populates properly, but nothing comes out. However, when I run the queries of these correlation searches in the Search & Reporting app, I do see events populate. I have already gone through the Splunk documentation on CIM compliance topics and watched some YouTube videos, but I still don't get it. Any extra sources that can help me understand this better will be very welcome. Thanks and best regards.
@sainag_splunk's answer is correct, but be aware that not all searches specify an index name and those that do might do so using a wildcard.  This is a Hard Problem addressed in the .conf24 talk "Maximizing Splunk Core: Analyzing Splunk Searches Using Audittrail and Native Splunk Telemetry", which offers some ways to get the indexes used by a search.
I think there is no easy way because, as @richgalloway mentioned, there is no warning when a search tries to use an index that doesn't exist. Even if you filter on result_count=0, you will get a lot of false positives. The best way is to run a search against the saved searches and validate against your existing indexes. The most reliable approach is the manual two-step method:

Step 1: Get scheduled searches and their referenced indexes

| rest /services/saved/searches | search disabled=0 is_scheduled=1 | rex field=search "index\s*=\s*(?<referenced_index>[^\s\|]+)" | stats values(title) as searches by referenced_index

Step 2: Get your current indexes

| rest /services/data/indexes | fields title

If this helps, please upvote.
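A rough way to combine the two steps into a single search, offered as a sketch rather than a definitive solution (the rex only catches literal index=name references, so wildcards, macros, and eventtypes will slip through and still need manual review):

| rest /services/saved/searches
| search disabled=0 is_scheduled=1
| rex field=search "index\s*=\s*(?<referenced_index>[^\s\|]+)"
| stats values(title) as searches by referenced_index
| search NOT
    [| rest /services/data/indexes
     | fields title
     | rename title as referenced_index ]

Whatever remains references an index that the indexes endpoint does not report, which makes a good candidate list for review before disabling those searches.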
The message is generated when the scheduler sees it's time to run a search, but a previous instance of the same search is still running. Usually, that happens because the search is trying to do too much and can't complete before the next run interval. You have a few options:
- Make the search more efficient so it runs faster and completes sooner
- Increase the schedule interval
- Run the search at a less-busy time
- Re-schedule other resource-intensive searches so they don't compete with the one getting skipped
Hi @mchoudhary

This typically happens when a scheduled search takes longer to complete than the interval between its scheduled runs, or when multiple demanding searches are scheduled to run at the same time, exhausting the available concurrency slots.

The cron schedule 2-59/5 * * * * means your search attempts to run every 5 minutes (at 2, 7, 12, ... minutes past the hour). The 33.3% skip rate (1 out of 3 runs skipped) strongly suggests that this search takes longer than 10 minutes but less than 15 minutes to complete, and that the concurrency limit for these types of searches under your user/role context is effectively 2. This pattern would cause two instances of the search to run concurrently, and the third attempt would be skipped because both available slots are still occupied.

Before changing any limits for the user/role etc., I would investigate how long the search is running for and, if possible, improve the performance of the search. Please can you check how long it is taking to run? If you are able to post the offending search here, we may be able to help improve its performance. Try the following search to get the duration of your scheduled searches:

index=_audit search_id="'scheduler*" info=completed | stats avg(total_run_time) as avg_total_run_time, p95(total_run_time) as p95_total_run_time by savedsearch_name

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
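As an additional check, offered as a sketch: the scheduler's internal logs record a reason for each skipped run, which confirms whether the per-search concurrency limit is really the cause.

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count

For the situation described above, the reason should read along the lines of "The maximum number of concurrent running jobs for this historical scheduled search ... has been reached", matching the CMC message.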
Hi Team, I have been observing one skipped search error on my CMC. The error is: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". The percentage skipped is 33.3%. I saw many solutions online suggesting to increase the user/role search job limit or to make changes in limits.conf (which I don't have access to), but I couldn't figure it out or get a clear explanation. Can someone help me solve this? Also, the saved search in question runs on a cron of 2-59/5 * * * * for a time range of 5 minutes. Please suggest.
Thanks for your previous guidance. I've retried the process and made a full backup of both /opt/splunk/etc and /opt/splunk/var just in case. I then proceeded with a clean reinstallation of Splunk Enterprise version 9.4.3. Everything seems to be working fine except for the KV Store, which is failing to start. Upon investigation, I found that the version used previously (4.0.x) is no longer compatible with Splunk 9.4.3, which likely makes my backup of the KV Store unusable under the new version. Additionally, even after the KV Store upgrade attempt, my Universal Forwarders still do not appear in the Forwarder Management view, even though they are actively sending data and I can see established TCP connections on port 9997.
There is no warning when a search tries to use an index that doesn't exist.
Very helpful, thanks!
Hi, I'm trying to clean up an old Splunk Cloud instance. One thought that occurred to me is to find scheduled searches that reference indexes that no longer exist; the indexes were cleaned up and deleted long ago, but scheduled searches may still exist against them. I tried looking in the internal indexes but don't see any sort of warning message for an index that does not exist. Does anyone know if such a warning message shows up that I could then use to deactivate these searches? Thanks in advance, Dave