All Posts

Hi @kiran_panchavat , This is what is present in my current props.conf on the Cluster Manager for this sourcetype (which was copied from another sourcetype):

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

Now, do I need to add the new settings to this props.conf and push it to the indexers? Or create a new props.conf on the Deployer that includes your props.conf stanza and push it to the search heads?
@Nawab  These are the 4 main scenarios I would imagine in a simple forwarder-receiver topology:

A. The forwarder crashes while it is unable to forward data to the receiver (whether due to an unreachable receiver, network issues, or an incorrect/missing outputs.conf or similar): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.

B. The forwarder is gracefully shut down while it is unable to forward data to the receiver (same causes as above): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.

C. The forwarder crashes, but has been able to forward data to the receiver so far: persistent queue data will be preserved on disk; however, in-memory data is very likely to be lost.

D. The forwarder is gracefully shut down, but has been able to forward data to the receiver so far: both persistent queue and in-memory data will be forwarded (and indexed) before the forwarder is fully shut down.
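For reference, persistent queues are configured per input in inputs.conf (they apply to network and scripted inputs, not to file monitor inputs). A minimal sketch, with a placeholder port and illustrative sizes:

== inputs.conf ==
[udp://514]
queueSize = 1MB
persistentQueueSize = 100MB

The persistent queue lives on disk, which is why it survives a restart but, as described in the scenarios above, cannot capture whatever was still in the in-memory queue when the process died.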
@Karthikeya ++ 
@kiran_panchavat , I checked this; my queues are full. But my question is: when the queues are back to normal, why do some UFs not come back on their own, so that we need to restart the service?
@Karthikeya  Please check this : https://community.splunk.com/t5/All-Apps-and-Add-ons/Cluster-Master-Conf-updates-via-Config-Explorer/m-p/444404 
@Nawab  Probably it contains something which broke the data pipeline. You should start with the following documents to understand what can cause this issue:
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline
https://conf.splunk.com/files/2019/slides/FN1570.pdf
https://docs.splunk.com/Documentation/Splunk/latest/DMC/IndexingDeployment
@Nawab  Useful pipeline searches with metrics.log:

How much time is Splunk spending within each pipeline?
index=_internal source=*metrics.log* group=pipeline | timechart sum(cpu_seconds) by name

How much time is Splunk spending within each processor?
index=_internal source=*metrics.log* group=pipeline | timechart sum(cpu_seconds) by processor

What is the 95th percentile of measured queue size?
index=_internal source=*metrics.log* group=queue | timechart perc95(current_size) by name
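Building on the searches above, here is a sketch of one more that expresses queue fill as a percentage of capacity; the field names (current_size_kb, max_size_kb) are the ones that appear in the metrics.log sample lines later in this thread:

index=_internal source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart perc95(fill_pct) by name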
@Nawab  In metrics.log:

group=queue displays the data to be processed.
current_size can identify which queues are the bottlenecks.
blocked=true indicates a busy pipeline.

Checking metrics.log across the topology reveals the whole picture. An occasional queue filling up does not indicate an issue. It becomes an issue when a queue remains full and starts to block other queues.

index=_internal source=*metrics.log host=<your-hostname> group IN(pipeline, queue)

02-23-2019 01:08:43.802 +0000 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=968, largest_size=968, smallest_size=968
02-23-2019 01:10:39.802 +0000 INFO Metrics - group=pipeline, name=typing, processor=sendout, cpu_seconds=0.05710199999999998, executes=134716, cumulative_hits=1180897
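A sketch of a follow-up search that shows how often each queue reports blocked=true (the 5-minute span is just an illustrative choice):

index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=5m count by name

Running it per host (add host=<your-hostname>) helps show whether the blocking starts at the indexers or HF and propagates back to the UFs.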
@Nawab Ensure there are no network connectivity problems between the UFs and the HFs. Sometimes intermittent network issues can cause the UFs to get stuck.

Check the queue size on the UFs. If the queue is full, the UF might stop processing new logs until space becomes available. Even though you mentioned that CPU and RAM utilization is normal, it might be worth checking for spikes or unusual patterns in resource usage. If the HF is overloaded, it might not be able to process logs from the UFs efficiently.

Please check the queues on the UF and Heavy Forwarder (HF), as they are likely reaching capacity. Consider increasing the number of parallel ingestion pipelines (see the sketch below).

Verify metrics.log on the UF and Heavy Forwarder to see if any queues are getting blocked. You can check it with:
cat /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"
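A minimal sketch of what increasing the pipelines could look like, assuming the host has spare CPU cores (server.conf on the UF or HF; 2 is just an illustrative value):

== server.conf ==
[general]
parallelIngestionPipelines = 2

Each additional pipeline consumes roughly one extra CPU core and its own set of queues, so only raise this where resources allow, and restart Splunk afterwards.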
Hi @livehybrid , I heard that search-time extractions are better than index-time extractions because of performance issues? Is that so? Please clarify.
@splunklearner  If you go down the ingest-time approach then you will add the props/transforms.conf within an app in your manager-apps folder on your Cluster Manager and then push it out to your indexers (see the sketch below). No changes should be required for your search heads if you go down that route, but feel free to evaluate the alternatives provided in this post too.

I hope this helps. Please let me know how you get on and consider upvoting/karma for this answer if it has helped.

Regards
Will
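A rough sketch of that layout and push, assuming a hypothetical app name my_ingest_fix (adjust paths to your installation):

# On the Cluster Manager -- app directory (the app name is just an example)
$SPLUNK_HOME/etc/manager-apps/my_ingest_fix/local/props.conf
$SPLUNK_HOME/etc/manager-apps/my_ingest_fix/local/transforms.conf

# Validate and push the configuration bundle to the indexer peers
$SPLUNK_HOME/bin/splunk validate cluster-bundle
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes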
How do I install the Config Explorer application in a clustered environment? We have a deployment server, cluster manager, and deployer. We push apps through the deployment server to the CM and deployer. Please tell me how to do this by downloading it from Splunkbase.
Hi @kiran_panchavat , can you please guide me on where to add your stanza? Indexers or search heads?
We have an environment where Splunk UFs send logs to an HF, and the UFs often get stuck even when the HF and indexers are up; we need to restart the UFs to get them sending logs again. Why do the UFs get stuck even when the indexer and HF are available? CPU and RAM utilization is normal on the server.
So should I put this stanza on the Deployer or the Cluster Manager?
@splunklearner Yes, KV_MODE is for search-time field extractions.

KV_MODE = [none|auto|auto_escaped|multi|multi:<multikv.conf_stanza_name>|json|xml]
* Used for search-time field extractions only.
* Specifies the field/value extraction mode for the data.
* Set KV_MODE to one of the following:
  * none - Disables field extraction for the host, source, or source type.
  * auto - Extracts field/value pairs separated by equal signs.
  * auto_escaped - Extracts field/value pairs separated by equal signs and honors \" and \\ as escaped sequences within quoted values. For example: field="value with \"nested\" quotes"
  * multi - Invokes the 'multikv' search command, which extracts fields from table-formatted events.
  * multi:<multikv.conf_stanza_name> - Invokes a custom multikv.conf configuration to extract fields from a specific type of table-formatted event. Use this option in situations where the default behavior of the 'multikv' search command is not meeting your needs.
  * xml - Automatically extracts fields from XML data.
  * json - Automatically extracts fields from JSON data.
* Setting to 'none' can ensure that one or more custom field extractions are not overridden by automatic field/value extraction for a particular host, source, or source type. You can also use 'none' to increase search performance by disabling extraction for common but nonessential fields.
* The 'xml' and 'json' modes do not extract any fields when used on data that isn't of the correct format (JSON or XML).
* If you set 'KV_MODE = json' for a source type, do not also set 'INDEXED_EXTRACTIONS = JSON' for the same source type. This causes the Splunk software to extract the JSON fields twice: once at index time and again at search time.
* When KV_MODE is set to 'auto' or 'auto_escaped', automatic JSON field extraction can take place alongside other automatic field/value extractions. To disable JSON field extraction when 'KV_MODE' is set to 'auto' or 'auto_escaped', add 'AUTO_KV_JSON = false' to the stanza.
* Default: auto
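For this thread's case, a minimal sketch of the search-time side, assuming a hypothetical sourcetype name your_sourcetype (this props.conf goes on the search heads, e.g. pushed via the Deployer in a search head cluster):

== props.conf ==
[your_sourcetype]
KV_MODE = json

This only affects how fields are extracted at search time; it does not change what is written to the index.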
I don't have access to the UI; I need to do it from the backend only. Where do I put this props.conf? On the cluster master or the deployer? Is it an index-time extraction or a search-time one?
Hi @kiran_panchavat , Thanks for the answer. But I read that KV_MODE = json needs to be set for search-time extraction, i.e., on the search heads... yet you are saying to set this on the indexers or heavy forwarders. Will that help? Please clarify.
Hi @splunklearner
To have this processed at ingest time you can do a simple INGEST_EVAL on your indexers.

== props.conf ==
[yourStanzaName]
TRANSFORMS = stripNonJSON

== transforms.conf ==
[stripNonJSON]
INGEST_EVAL = _raw:=replace(_raw, ".*- ({.*})", "\1")

Please let me know how you get on and consider upvoting/karma for this answer if it has helped.

Regards
Will
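If you want to sanity-check the regex before pushing the bundle, here is a quick search-time sketch (the sample event below is made up purely for illustration):

| makeresults
| eval _raw="2024-01-01 10:00:00 hostname proc[123]: - {\"user\":\"alice\",\"action\":\"login\"}"
| eval stripped=replace(_raw, ".*- ({.*})", "\1")
| table _raw stripped

The same replace() expression is what the INGEST_EVAL above applies at index time, so if the search-time version returns only the JSON portion, the ingest-time transform should behave the same way.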
Did you go through the above response and have follow-up questions?