How to inspect each feed by different criteria, based on sourcetype or source:
- Average ingestion rate per day
- Minimum event size, 24 hour period
- Average event size, 24 hour period
- Maximum event size, 24 hour period
- Median event size, 24 hour period
- Standard deviation of event size, 24 hour period
Most of those can be collected with three commands. The rest depends on what is meant by "average ingestion rate" (what is being averaged over what period).
<<your search>>
| bin _time span=24h
| eval size=len(_raw)
| stats min(size) as MinEventSize, avg(size) as AvgEventSize, max(size) as MaxEventSize, median(size) as MedianEventSize, stdev(size) as StdevEventSize by _time, sourcetype
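For the ingestion rate, here is one sketch, assuming "average ingestion rate" means the average number of raw bytes ingested per day for each sourcetype (the field names BytesPerDay and AvgDailyBytes are illustrative):
<<your search>>
| bin _time span=1d
| eval size=len(_raw)
| stats sum(size) as BytesPerDay by _time, sourcetype
| stats avg(BytesPerDay) as AvgDailyBytes by sourcetype
If you mean events per day rather than bytes per day, swap sum(size) for count.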
When I look at the average and max size of the events, I see that the max event size is sometimes exactly 300,000 bytes, which is suspicious. Could you tell me whether the event fields have changed, or whether we are receiving 10 events in one chunk of 300,000 bytes?
It could be that the events are being truncated: an event size that lands exactly on a round number like 300,000 bytes usually means the raw event was longer and got cut off at a configured limit. Check the TRUNCATE setting in the relevant props.conf stanza.
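For reference, a minimal props.conf sketch (the stanza name my_sourcetype and the 300000 value are assumptions for illustration; Splunk's default TRUNCATE is 10000 bytes):
[my_sourcetype]
# Maximum number of bytes kept per event before truncation.
# Raise this only if the events are legitimately larger than the current limit.
TRUNCATE = 300000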