Hello! I have a cluster with 3,000 volumes, and the Splunk Add-on for NetApp Data ONTAP only collects performance data for 50 volumes per collection interval. If I set perf_chunk_size_cluster_mode=500, it collects performance data for 500 volumes and then stops. I want to collect performance data for all 3,000 of my volumes. After the collection finishes (usually within seconds) I get "[ getJob ] could not find a job to do, sleeping before retry" in the worker logs. What do I need to tweak in order to collect performance data from all 3,000 volumes? Thanks, Sebastian
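What I would expect from a chunk-size setting is that the worker walks through all volumes chunk_size at a time until every volume has been queried, rather than stopping after one chunk. A minimal sketch of that expectation (illustrative only, not the add-on's actual scheduler code; function and variable names are made up):

```python
# Sketch of chunked collection: process the target list chunk_size at a
# time until all targets are covered, instead of stopping after the
# first chunk. (Hypothetical illustration, not the add-on's real code.)

def collect_in_chunks(volumes, chunk_size):
    collected = []
    for start in range(0, len(volumes), chunk_size):
        chunk = volumes[start:start + chunk_size]
        collected.extend(chunk)  # stand-in for one perf query per chunk
    return collected

vols = [f"vol{i}" for i in range(3000)]
done = collect_in_chunks(vols, 500)
print(len(done))  # 3000 -- every volume covered across 6 chunks
```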
I have enabled Windows auditing on a Windows machine and mounted the directory where all logs are written on an Ubuntu machine where Splunk is installed. I am then monitoring the mounted audit file from the Splunk instance. The monitored file is in XML format, the events are single-line, and the last line of the XML file is always </Events>. Every new event is written before that last line, i.e. on the second-to-last line.
The problem is that every time new events are written to the monitored XML file, Splunk re-indexes the entire file.
When I search for "index=_internal sourcetype=splunkd component=WatchedFile", I get the result: INFO WatchedFile - Checksum for seekptr didn't match, will re-read the entire file='/mnt/netapp_audit/audit/audit_splunk_audit_last.xml'.
Other than that, the events are parsed correctly in Splunk.
Why is the entire file re-indexed every time logs are written to the monitored XML file?
Is it possible to get Splunk to only read events up to the second-to-last line?
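My understanding of the checksum mismatch is that Splunk remembers a seek pointer at the previous end of file plus a checksum of the bytes leading up to it; because each new event is inserted before the closing </Events> tag, the content at the remembered offset changes, so the checksum no longer matches and the whole file is re-read. A minimal sketch of that effect (line-based for simplicity; this is my assumption about the mechanism, not Splunk's actual implementation):

```python
# Demonstrate why inserting events before the closing tag invalidates a
# checksum taken at the previous end-of-file.
import hashlib

def append_event(xml_lines, event):
    # New events land on the second-to-last line, before </Events>.
    return xml_lines[:-1] + [event, xml_lines[-1]]

def tail_checksum(lines, offset_lines):
    # Stand-in for Splunk's seekptr checksum: hash the content that
    # sat at the remembered offset.
    data = "\n".join(lines[:offset_lines]).encode()
    return hashlib.sha256(data).hexdigest()

log = ["<Events>", "<Event>1</Event>", "</Events>"]
seekptr = len(log)                   # remembered "offset" at old EOF
before = tail_checksum(log, seekptr)

log = append_event(log, "<Event>2</Event>")
after = tail_checksum(log, seekptr)  # same offset, different content

print(before != after)  # True: checksum mismatch -> full re-read
```

A plain append-only log (new lines added after the old EOF) would not trip this check, which is why this file layout in particular causes repeated re-indexing.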