
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Sorry for such a late reply. You can either use DB Connect, or send the RDS logs to AWS CloudWatch and bring the data into Splunk with the Splunk Add-on for AWS, using SQS-based S3 inputs.
What does mongod.log say? What version of Splunk?  What version of KVStore?  Are you using mmapv1 or wiredTiger?
The following issues are showing up:
KV Store changed status to failed. KVStore process terminated.
10/2/2025, 12:23:23 am - Failed to start KV Store process. See mongod.log and splunkd.log for details.
10/2/2025, 12:23:23 am - KV Store process terminated abnormally (exit code 4, status PID 6147 killed by signal 4: Illegal instruction). See mongod.log and splunkd.log for details.
#kvstore @kvstore @splunk
Hi @HaakonRuud, When mode = single, the stats setting is implicitly ignored. As a result, each event contains an average of the samples collected every samplingInterval milliseconds over interval seconds. Your collection interval is 60 seconds (1 minute): interval = 60. Your mstats time span is also 1 minute: span=1m. As a result, you only have 1 event per time interval, so the mean and maximum will be equivalent.
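To illustrate, here is a minimal sketch of the kind of perfmon input and mstats search being described; the counter, object, index, and metric name are placeholders and not taken from the original post:

# inputs.conf - hypothetical Windows perfmon stanza
[perfmon://CPU]
object = Processor
counters = % Processor Time
mode = single              # stats setting is ignored in this mode
samplingInterval = 1000    # sample every 1000 milliseconds
interval = 60              # emit one averaged event every 60 seconds
index = perf_metrics

# mstats over the same 1-minute span: only one data point per span,
# so avg() and max() return the same value
| mstats avg(_value) AS mean max(_value) AS maximum WHERE index=perf_metrics AND metric_name="Processor.%_Processor_Time" span=1m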
Hi @madhav_dholakia, Can you provide more context for the defaults object and a small sample dashboard that doesn't save correctly? Is defaults defined at the top level of the dashboard? I.e.: { "visualizations": { ... }, "dataSources": { ... }, "defaults": { ... }, ... }  
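For reference, a minimal sketch of a Dashboard Studio definition with a top-level defaults block might look like the following; the option names and values here are illustrative, not taken from your dashboard:

{
    "title": "Sample dashboard",
    "visualizations": {
        "viz_1": {
            "type": "splunk.singlevalue",
            "dataSources": { "primary": "ds_1" }
        }
    },
    "dataSources": {
        "ds_1": {
            "type": "ds.search",
            "options": { "query": "index=_internal | stats count" }
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": { "earliest": "-24h@h", "latest": "now" }
                }
            }
        }
    },
    "inputs": {},
    "layout": {
        "type": "grid",
        "structure": [
            { "item": "viz_1", "position": { "x": 0, "y": 0, "w": 300, "h": 300 } }
        ]
    }
}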
Hi @splunk_user_99, Which version of MLTK do you have installed? The underlying API uses a simple payload: {"name":"my_model","app":"search"} where name is the value entered in New Main Model Title and app is derived from the value selected in Destination App. The app list is loaded when Operationalize is clicked and sorted alphabetically by display name. On submit, the request payload is checked to verify that it contains only the 'app' and 'name' keys. Do you have the same issue in a sandboxed (private, incognito, etc.) browser session with extensions disabled?
Hi @splunklearner , I guess the answer really is "it depends"; however, in this scenario we are overwriting the original data with just the JSON, rather than adding an additional extracted field. Search-time field extractions/eval/changes are executed every time you search the data, and in some cases need to be evaluated before the search is filtered down. For example, if you search for "uri=/test", you may find that at search time Splunk needs to process all events to determine the uri field for each event before it can filter down. Being able to search against the URI without having to modify every event at search time means it should be faster. The disadvantage of index-time extractions is that they don't apply retrospectively to data you already have, whereas search-time extractions apply to everything currently indexed.
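As a sketch of the index-time approach being described, assuming the goal is to keep only the JSON portion of _raw (the sourcetype name and regex are hypothetical):

# props.conf (on the indexers / heavy forwarders)
[my_json_sourcetype]
TRANSFORMS-keep_json = keep_json_only

# transforms.conf
[keep_json_only]
# Drop everything before the first "{" so _raw contains only the JSON payload
INGEST_EVAL = _raw=replace(_raw, "^[^{]+", "")

The search-time equivalent would be KV_MODE = json (or an EXTRACT/EVAL in props.conf), which is re-applied on every search instead of once at ingest.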
I have identified that the aggqueue and tcpout_Default_autolb_group queues are having the most issues; the aggregator processor and one sourcetype have the most CPU utilization. How can I fix this?
Hi @kiran_panchavat , This is what is present in my current props.conf on the Cluster Manager for this sourcetype (which was copied from another sourcetype):
[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
Now do I need to add your settings to this props.conf and push it to the indexers? Or create a new props.conf on the Deployer that includes your props.conf stanza and push it to the search heads?
@Nawab  These are the 4 main scenarios I would imagine in a simple forwarder-receiver topology:
A. Forwarder is crashing while it is unable to forward data to the receiver (regardless of whether it's due to an unreachable receiver, network issues, or an incorrect/missing outputs.conf or alike): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.
B. Forwarder is gracefully shut down while it is unable to forward data to the receiver (regardless of whether it's due to an unreachable receiver, network issues, or an incorrect/missing outputs.conf or alike): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.
C. Forwarder is crashing, but has been able to forward data to the receiver so far: persistent queue data will be preserved on disk; however, in-memory data is very likely to be lost.
D. Forwarder is gracefully shut down, but has been able to forward data to the receiver so far: both persistent queue and in-memory data will be forwarded (and indexed) before the forwarder is fully shut down.
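For context, a persistent queue is configured per input in inputs.conf; a minimal sketch (the port and sizes are illustrative):

# inputs.conf on the forwarder - hypothetical UDP input with a persistent queue
[udp://5514]
sourcetype = syslog
queueSize = 10MB             # in-memory queue
persistentQueueSize = 500MB  # on-disk queue used when the in-memory queue fills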
@Karthikeya ++ 
@kiran_panchavat , I checked this; my queues are full. But my question is: when the queues are back to normal, why are some UFs not back, and why do we need to restart the service?
@Karthikeya  Please check this : https://community.splunk.com/t5/All-Apps-and-Add-ons/Cluster-Master-Conf-updates-via-Config-Explorer/m-p/444404 
@Nawab  Probably it contains something which broke the data pipeline. You should start with the following documents to understand what can cause this issue: https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline  https://conf.splunk.com/files/2019/slides/FN1570.pdf  https://docs.splunk.com/Documentation/Splunk/latest/DMC/IndexingDeployment 
@Nawab  Useful pipeline searches with metrics.log:
How much time is Splunk spending within each pipeline?
index=_internal source=*metrics.log* group=pipeline | timechart sum(cpu_seconds) by name
How much time is Splunk spending within each processor?
index=_internal source=*metrics.log* group=pipeline | timechart sum(cpu_seconds) by processor
What is the 95th percentile of measured queue size?
index=_internal source=*metrics.log* group=queue | timechart perc95(current_size) by name
@Nawab  In metrics.log:
group=queue displays the data to be processed
current_size can identify which queues are the bottlenecks
blocked=true indicates a busy pipeline
Checking metrics.log across the topology reveals the whole picture. An occasional queue filling up does not indicate an issue. It becomes an issue when it remains full and starts to block other queues.
index=_internal source=*metrics.log host=<your-hostname> group IN(pipeline, queue)
02-23-2019 01:08:43.802 +0000 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=968, largest_size=968, smallest_size=968
02-23-2019 01:10:39.802 +0000 INFO Metrics - group=pipeline, name=typing, processor=sendout, cpu_seconds=0.05710199999999998, executes=134716, cumulative_hits=1180897
@Nawab Ensure there are no network connectivity problems between the UFs and the HFs. Sometimes intermittent network issues can cause the UFs to get stuck. Check the queue size on the UFs: if the queue is full, the UF might stop processing new logs until space is available. Even though you mentioned that CPU and RAM utilization is normal, it might be worth checking for spikes or unusual patterns in resource usage. If the HF is overloaded, it might not be able to process logs from the UFs efficiently. Please check the queues on the UF and Heavy Forwarder (HF), as they are likely reaching capacity. Consider increasing the pipeline capacity, as sketched below. Verify metrics.log on the UF and Heavy Forwarder to see if any queues are getting blocked. You can check it with: cat /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"
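One possible way to increase pipeline capacity, as a hedged sketch (the pipeline count and queue size are illustrative and should be tuned to the host's resources):

# server.conf on the UF/HF - add a second ingestion pipeline set
[general]
parallelIngestionPipelines = 2

# outputs.conf on the UF/HF - enlarge the output queue
[tcpout]
maxQueueSize = 10MB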
Hi @livehybrid , I heard that search-time extractions are better than index-time extractions due to performance issues? Is that so? Please clarify.
@splunklearner  If you go down the ingest-time approach, then you will add the props.conf/transforms.conf within an app in the manager-apps folder on your Cluster Manager and then push them out to your indexers. No changes should be required on your search heads if you go down that route, but feel free to evaluate the alternatives provided in this post too. I hope this helps. Please let me know how you get on, and consider upvoting/karma for this answer if it has helped. Regards Will
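As a rough sketch of that layout (the app name is hypothetical, and the bundle push can also be done from the Cluster Manager UI):

# On the Cluster Manager
$SPLUNK_HOME/etc/manager-apps/my_ingest_props/local/props.conf
$SPLUNK_HOME/etc/manager-apps/my_ingest_props/local/transforms.conf

# Push the configuration bundle to the peer nodes (indexers)
$SPLUNK_HOME/bin/splunk apply cluster-bundle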
How do I install the Config Explorer application in a clustered environment? We have a deployment server, cluster manager, and deployer. We push apps through the deployment server to the CM and deployer. Please tell me how to do this by downloading it from Splunkbase.