Getting Data In

Splunk indexing more than the normal amount of data after re-installation of the universal forwarder


The universal forwarder installed on "server A" was uninstalled on 14th May due to an issue.
So, after 14th May, logs from "server A" were not being indexed in Splunk.
On 30th May, we re-installed the universal forwarder on "server A", but there was a huge spike in the data ingested over the next couple of days.
Where the daily ingestion rate had been about 1GB per day, it ingested at a rate of approximately 15GB per day for the next 2 days.
Moreover, the source from which the logs are ingested on "server A" keeps only 1 day's worth of data.

Can somebody please explain how, in the above scenario, the indexing of the data increased almost 15-fold?


Revered Legend

Did you see any data being duplicated? You can look at license usage (index=_internal source=*license_usage.log) for those sources (files) to see if you got historical data being ingested (or run a tstats command to see whether you got data for just those 2 days or for all the missing days since 14th May).
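As a sketch, the two checks above could look something like the searches below. The host name "serverA" and the index wildcard are placeholders; substitute your actual host and index names.

```spl
Daily licensed bytes per source for the forwarder's host, to spot the spike days
(h = host, s = source, b = bytes in license_usage.log "Usage" events):

index=_internal source=*license_usage.log type=Usage h=serverA
| timechart span=1d sum(b) AS bytes_indexed BY s

Daily event counts by the events' own timestamps, to tell whether the spike is
backfilled historical data (events dated 14th-30th May) or duplicates of recent data:

| tstats count WHERE index=* host=serverA BY _time span=1d
```

If the tstats results show events spread across the missing dates, the forwarder backfilled older data; if everything lands on the 2 spike days with inflated counts, you are likely looking at re-ingestion (duplicates) of files it had already read before the uninstall.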
