To reply to your question about latency: events from tracker.log have not been seen for the last 546 seconds, which is more than the red threshold (210 seconds), and events from tracker.log are delayed by 32126 seconds, which is more than the red threshold (180 seconds). The regex is efficient; I tested it on regex101. At index time there is only one regex, the one I wrote for incoming firewall data so that only blocked-traffic logs are accepted. Because the firewall sends a very large volume of logs, the indexer has to run that filter over all of them at index time and keep only the blocked traffic. Question: how would adding an indexer help me in this case? Would the two indexers share the index-time filtering work?
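As a side note on "the regex is efficient": a pattern that matches quickly on a single test string in regex101 can still become the bottleneck when applied to every event at ingest volume, and small changes in how the pattern is written change throughput a lot. A minimal sketch (the log format, the field `action=blocked`, and the volumes are made up for illustration, not taken from your firewall):

```python
import re
import time

# Hypothetical firewall-style log lines: half blocked, half allowed.
lines = [f"2024-01-01 fw1 action=blocked src=10.0.0.{i}" for i in range(50000)]
lines += [f"2024-01-01 fw1 action=allowed src=10.0.0.{i}" for i in range(50000)]

# An unanchored wildcard pattern forces the engine to consume the whole line;
# a plain literal search fails fast on non-matching events.
wildcard = re.compile(r".*action=blocked.*")
literal = re.compile(r"action=blocked")

def filter_count(pattern):
    """Return how many lines the pattern keeps and how long the pass took."""
    start = time.perf_counter()
    kept = sum(1 for line in lines if pattern.search(line))
    return kept, time.perf_counter() - start

kept_w, t_w = filter_count(wildcard)
kept_l, t_l = filter_count(literal)

# Both keep the same events; only the per-event cost differs.
print(kept_w, kept_l)
```

Both variants select the same 50000 blocked events, but the simpler literal form does less work per event, which is what matters when the indexer runs the pattern on every incoming syslog message.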
Hello,

I have a daily license indexing limit, so I wanted to limit the data indexed from a specific piece of equipment. I receive a large volume of logs from this source over syslog, and I cannot change which log types it sends to Splunk. To reduce the amount of data being indexed, I filtered the data at index time in Splunk: I added a regex so that Splunk only indexes the wanted log types and discards the other syslog events received from that equipment. I did this with a TRANSFORMS-set entry in props.conf and the regex in transforms.conf.

As a result, I get the following errors in Splunk health that I cannot fix:

Ingestion Latency: Events from tracker.log have not been seen for the last 2940 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.

TailReader-0: The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.

Whenever I remove the regex, the problem goes away, so the regex is the only source of this error.

Thank you in advance for your help.
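For reference, the index-time filtering described above is usually configured with a "discard everything, then re-queue the wanted events" pair of transforms. This is only a sketch: the sourcetype name `fw:syslog` and the pattern `action=blocked` are placeholders, not the actual values from my setup.

```
# props.conf -- transforms are applied left to right,
# so setnull runs first and keep_blocked can override it
[fw:syslog]
TRANSFORMS-set = setnull, keep_blocked

# transforms.conf
[setnull]
# Send every event to the null queue (discard it)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_blocked]
# Re-route only blocked-traffic events back to the index queue
REGEX = action=blocked
DEST_KEY = queue
FORMAT = indexQueue
```

With this layout, only the second transform's regex has to do real matching work; the catch-all in [setnull] is a single-character pattern, so the per-event regex cost is concentrated in keep_blocked.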