Hello,
we are forwarding logs from a host via a universal forwarder. As the universal forwarder is not able to filter events (logs), we went with adjusting transforms.conf and props.conf.
After editing those files we indeed ingested only the expected and desired logs, according to the regex in transforms.conf. However, the indexed volume stayed the same.
So I tried sending all events to the nullQueue and checked the indexed volume again. For some reason, even with zero events, the query for indexed volume still reports a very high number.
Here are the snippets from the relevant files and queries:
1. Search query for getting the indexed volume:
index="_internal" source="*metrics.log" per_index_thruput series=<my index>
| eval GB=kb/(1024*1024)
| timechart span=2min partial=f sum(GB) by series
2. A rather boring one: the search to check the event count
index=<my index>
| stats count
3. Stanza in transforms.conf (to kill all events for testing)
[<my transformation>]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
4. Stanza in props.conf for the sourcetype
[<my sourcetype>]
TRANSFORMS-setnull = <my transformation>
------------------------------------------------------------------------
I also tried with TRANSFORMS-set... no idea what the difference between the two is, but that doesn't work either.
So the nullQueue is working, as I have no events in the index; however, the query for indexed volume is off the charts.
Any help would be appreciated.
Thanks,
Mike
Check which machine is logging those metrics.log entries: UFs can generate per_index_thruput as well, which would be the volume prior to filtering.
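To see where the numbers come from, you can split the same throughput search by host (a sketch based on the search from the question; <my index> is a placeholder):

index="_internal" source="*metrics.log" group=per_index_thruput series=<my index>
| eval GB=kb/(1024*1024)
| timechart span=2min partial=f sum(GB) by host

If forwarders show up alongside the indexer, their pre-filtering throughput is inflating the total.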
The only number relevant for billable license usage is "RolloverSummary", which is calculated once a day and written to "license_usage.log" and "license_usage_summary.log" by the License Manager.
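You can query those events directly for the daily license figure (a minimal sketch; run it on or against the License Manager, and note that the b field carries bytes):

index="_internal" source="*license_usage.log" type=RolloverSummary
| eval GB=b/(1024*1024*1024)
| timechart span=1d sum(GB) by pool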
Hi Martin,
you are dead on! All the forwarders were listed under host in the throughput query.
When I restricted the search to the indexer host only, the graph made much more sense.
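For reference, here is the adjusted search (host and index names are placeholders):

index="_internal" source="*metrics.log" host=<my indexer> group=per_index_thruput series=<my index>
| eval GB=kb/(1024*1024)
| timechart span=2min partial=f sum(GB) by series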
Thanks,
Mike