All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does that mean the data is indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
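For illustration, a minimal outputs.conf sketch of the two behaviours, assuming a heavy forwarder and placeholder indexer host names:

# outputs.conf on the (heavy) forwarder -- host names are placeholders
[tcpout]
defaultGroup = primary_indexers
# true = keep a local indexed copy AND forward (data is stored twice);
# false (the default) = parse ("cook") the data and forward it only
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997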
We have a case where we can search and find events that match the search criteria. The client would like to see the events immediately prior in time to the one we matched via the SPL. Can we do that?
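One sketch of how this could be done, assuming a single matched event and an arbitrary five-minute look-back window (index, sourcetype, and the window size are placeholders):

index=my_index sourcetype=my_sourcetype "search criteria"
| head 1
| eval earliest=_time-300, latest=_time
| map search="search index=my_index sourcetype=my_sourcetype earliest=$earliest$ latest=$latest$"

The map command re-runs the inner search once per matched event, substituting that event's time window into the $earliest$/$latest$ tokens.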
Is there a REST API available for Notable Suppression? I would like to get the suppression details and modify them via the REST API.
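If I recall correctly, Enterprise Security stores suppressions as eventtypes named notable_suppression-<name>, so the generic saved/eventtypes endpoint may work; a sketch under that assumption (host, credentials, and the suppression name are placeholders):

# List suppressions (assumes they are stored as notable_suppression-* eventtypes)
curl -k -u admin:changeme "https://sh.example.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/eventtypes?search=notable_suppression"

# Update one suppression's filter expression
curl -k -u admin:changeme "https://sh.example.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/eventtypes/notable_suppression-example" -d search="source=my_correlation_search"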
Hi guys, I am currently encountering an error that is affecting performance and delaying file processing. I would appreciate any insights or solutions to address this issue.

01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.

01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed

I would greatly appreciate your assistance. Thank you.
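In case it helps frame answers: the initCRC message suggests the head of the file changed after it was queued (rotation or rewrite). One possible inputs.conf adjustment, as a sketch only (the monitor path is taken from the log above):

# inputs.conf on the forwarder
[monitor:///apps2.log]
# Hash more of the file head for the initial CRC so that changes in the
# first 256 bytes are not mistaken for a different file
initCrcLength = 1024
# Alternatively, fold the file name into the CRC (useful with rotation):
# crcSalt = <SOURCE>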
I am writing a log file on my host using the command below:

for ACCOUNT in "$TARGET_DIR"/*/; do
  if [ -d "$ACCOUNT" ]; then
    cd "$ACCOUNT"
    AccountId=$(basename "$ACCOUNT")
    AccountSize=$(du -sh . | awk '{print $1}')
    ProfilesSize=$(du -chd1 --exclude={events,segments,data_integrity,api} | tail -n1 | awk '{print $1}')
    NAT=$(curl -s ifconfig.me)
    echo "AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"
  fi
done

I have forwarded this log file to Splunk using the Splunk Forwarder. The script appends new log entries to the file after each successful loop iteration. However, I am not seeing the logs with the correct timestamps, as shown in the attached screenshot: the logs appear as if from 2022, but I only started sending them to Splunk on 17/01/2025. Additionally, the Splunk Forwarder is sending some logs as single-line events and others as multi-line events. Could you explain why this is happening?
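For context, the entries above contain no timestamp at all, so Splunk has to guess one, and without line-breaking rules it may also merge lines. A props.conf sketch that would pin both down (the sourcetype name is a placeholder):

# props.conf on the indexer or heavy forwarder
[account_size_log]
# Each entry is exactly one line, so never merge lines into one event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# The entries carry no timestamp; stamp them with the index time
# instead of letting Splunk extract a date from the data
DATETIME_CONFIG = CURRENT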
Hello Splunk Community, I have a use case where we need to send metrics directly to Splunk instead of AWS CloudWatch, while still sending CPU and memory metrics to CloudWatch for auto-scaling purposes. Datadog offers solutions, such as their AgentCheck package (https://docs.datadoghq.com/developers/custom_checks/write_agent_check/), and their repository (https://github.com/DataDog/integrations-core) provides several integrations for similar use cases. Is there an equivalent solution or approach available in Splunk for achieving this functionality? Looking forward to your suggestions and guidance! Thanks!
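By way of comparison, a sketch of sending one metric datapoint straight to Splunk via the HTTP Event Collector metrics format (host, token, index, and metric names are placeholders):

# Requires an HEC token bound to a metrics index
curl -k "https://splunk.example.com:8088/services/collector" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "metric", "source": "my_app", "index": "app_metrics", "fields": {"metric_name:cpu.utilization": 42.5, "region": "us-east-1"}}'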
Hello Team, when an organization has a hybrid deployment and is also using the Splunk Cloud service, can data be sent directly to Splunk Cloud? For example, there is a SaaS application that only has an option to send logs over syslog; how can this be achieved while using Splunk Cloud? What are the options for data input here? If someone could elaborate. Thanking you in advance, regards, Moh
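For reference, Splunk Cloud does not accept raw syslog directly; the usual pattern is an on-prem receiver (Splunk Connect for Syslog, or a forwarder with a network input) that relays to the cloud stack. A minimal forwarder-based sketch, with the port, index, and sourcetype as assumptions:

# inputs.conf on an on-prem heavy forwarder acting as the syslog receiver
[tcp://514]
sourcetype = syslog
index = saas_syslog

# outputs.conf on the same host then forwards to the Splunk Cloud indexers
# as usual (normally via the stack's universal forwarder credentials app)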
Hello everyone, this is the error message I am getting while forwarding data from a universal forwarder to an indexer. This is what I found in the error logs, and I am not able to understand it. Can anyone help me with this?

01-17-2025 06:32:15.605 +0000 INFO TailReader [1654 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
I am unable to select an option in the dropdown or type anything (the first part of the URL) in the "Select Cloud Stack" field while creating a support case. The dropdown for adding the Cloud Stack Name seems to be stuck; I have tried other browsers too.
I am not able to see the file content in the indexer after restarting the universal forwarder. What could be the reason?
I am trying to execute the sample command in Splunk MLTK. For some reason, I get an error every time I run a stats command after the sample command.

index=_internal | sample partitions=3 seed=42 | stats count by action, partition_number

Search error
Error in 'sample' command: The specified field name for the partition already exists: partition_number

I tried providing a different field name and it is still the same error. If I remove the stats command and run the same search multiple times, it works without any issues. What could be the reason?
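One workaround sketch, on the assumption that an already-present partition_number field (for example from a cached or extracted field) is colliding with the one the sample command tries to create:

index=_internal
| fields - partition_number
| sample partitions=3 seed=42
| stats count by action, partition_number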
I don't see a "Create new token" option under Settings > Tokens. Is anyone else having this issue? I'm not sure if it's a permission-related issue, but others on the team also can't create a new token.
We have a custom dashboard in Splunk that has a few filters, one of which is a multiselect. This dashboard allows users to perform CRUD operations on POA&Ms. The multiselect in question lists all POA&M statuses that have been previously created and filters the results displayed in the table. The filter works fine for searching results for the table. The issue is that if someone creates a new POA&M with a status that hasn't been used yet, e.g. "Closed", the page must be refreshed before the multiselect re-executes the search powering it and displays "Closed" as an option. Is there a way to "refresh" the multiselect with JavaScript after a new POA&M is created? The POA&M CRUD operations are performed with JS and Python, by the way. Here's the XML of the multiselect for reference:
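A sketch of the kind of thing that might work, assuming the multiselect's id in the XML is poam_status_ms and that its populating search is reachable through the input's managerid setting (this can vary by Splunk version):

// in the dashboard's JS extension
require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // Call after a POA&M create succeeds, to re-run the search
    // that populates the multiselect's choices
    window.refreshStatusFilter = function () {
        var input = mvc.Components.get('poam_status_ms');
        var mgrId = input && input.settings.get('managerid');
        var mgr = mgrId && mvc.Components.get(mgrId);
        if (mgr) { mgr.startSearch(); }
    };
});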
So I have an index which contains "Starting iteration" in one event and "Stopping iteration" in another event. I want to get the time taken from event 1 to event 2, and if it is over 15 minutes I can then set up an alert to warn me.
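One sketch, assuming the two events can be paired in time order (the index name is a placeholder; if iterations can overlap you would also need a correlation field to group on):

index=my_index ("Starting iteration" OR "Stopping iteration")
| transaction startswith="Starting iteration" endswith="Stopping iteration"
| where duration > 900

The duration field produced by the transaction command is in seconds, so 900 corresponds to the 15-minute threshold.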
Hi, I'm trying to get a query for a table containing all the indexes that do not have self storage attached, but I couldn't find anything useful. Does anyone have an idea of how to do it? Thanks!
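A sketch of one possible approach, assuming self storage is exposed on the indexes REST endpoint as the archiver.selfStorageBucket attribute (as in indexes.conf for Splunk Cloud DDSS):

| rest /services/data/indexes count=0
| where isnull('archiver.selfStorageBucket') OR 'archiver.selfStorageBucket'=""
| table title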
I'm seeing hundreds of these errors in the internal splunkd logs:

01-16-2025 12:05:00.584 -0600 ERROR UserManagerPro [721361 SchedulerThread] - user="nobody" had no roles

Is this a known bug? I'm guessing knowledge objects with no owner defined are causing this. It's annoying because it fills the internal logs with noise. Is there an easy workaround without having to re-assign all objects without a valid owner?
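To help narrow it down, a sketch for inventorying objects whose owner no longer matches any existing user (this uses the generic knowledge-object directory and user-list endpoints; run it on the affected search head):

| rest /servicesNS/-/-/directory count=0 splunk_server=local
| fields title, eai:acl.app, eai:acl.owner
| search NOT [| rest /services/authentication/users splunk_server=local | fields title | rename title as eai:acl.owner]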
Hello, I wanted to know where I should put the KV_MODE=json attribute to extract the JSON fields automatically: on the deployment server, the manager node, or the deployer? We have props.conf in an app on the DS. The DS pushes that app to the manager node, and the manager distributes that app to the peer nodes. Can I add it to that props.conf, or please suggest an alternative.
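For context, KV_MODE is a search-time setting, so it takes effect on the search head(s) rather than on the indexer peers; a minimal sketch of the stanza (the sourcetype name is a placeholder):

# props.conf -- must be in an app that reaches the search head(s);
# in a search head cluster that means pushing it via the deployer
[my_json_sourcetype]
KV_MODE = json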
Hi, I am trying to push the configuration bundle from the CM to the indexers. I keep getting the error message "Last Validate and Check Restart: Unsuccessful". The validation is done for one of the indexers, and it's 'checking for restart' for the other two. When I checked the last-change date on all the indexers, only one of them had been updated and the other two had not, which is the opposite of what the CM's UI shows. Regards, Pravin
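For anyone triaging, a sketch of the CLI checks I would expect to run on the cluster manager (per standard cluster-bundle usage):

# Show per-peer bundle and validation status
splunk show cluster-bundle-status

# Re-run validation plus the restart check without applying anything
splunk validate cluster-bundle --check-restart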
Hi everyone! My goal is to create an alert that monitors ALL saved searches for any email recipient that no longer exists (mainly colleagues that left the company or similar). My idea was to search for the same pattern as the Mail Delivery Subsystem bounce that you get when sending an email from Gmail (or any other provider) to a non-existent address. But I didn't find anything in the _internal index, nor with a REST call to saved searches, and index=mail is empty. Any idea?
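Since bounce messages land on the mail server rather than in Splunk's _internal index, one alternative sketch is to inventory the recipients of every saved search via REST and compare that list against your directory:

| rest /servicesNS/-/-/saved/searches count=0 splunk_server=local
| search action.email=1
| table title, eai:acl.app, action.email.to, action.email.cc, action.email.bcc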
Hi, looking at the activity of the splunkd threads on the indexers, I've seen in the Monitoring Console that sometimes there is no activity for a period of one minute. Is this normal?

(screenshot attached as evidence)

Regards, thank you very much.