The universal forwarder installed on "server A" was uninstalled on 14th May due to an issue.
As a result, logs from "server A" were not being indexed in Splunk after 14th May.
On 30th May, we re-installed the universal forwarder on "server A", but there was a huge spike in the data ingested over the next couple of days.
Where the normal ingestion rate was about 1GB per day, it ingested at approx. 15GB per day for the next 2 days.
Moreover, the source from which the logs are ingested on "server A" retains only 1 day's worth of data.
So can somebody please explain how, in the above scenario, the indexed data volume increased almost 15 times?
Did you see any data being duplicated? You can look at license usage (index=_internal source=*license_usage.log) for those sources (files) to see if you got historical data being ingested, or run a tstats search to see whether you got data for just those 2 days or for all the missing days since May 14th.
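As a rough sketch (the host value "serverA" and index name "your_index" below are placeholders, adjust them for your environment), something like this should break the licensed volume down per source per day so you can see where the spike came from:

    index=_internal source=*license_usage.log type=Usage h="serverA"
    | timechart span=1d sum(b) AS bytes BY s

And to check whether the extra events are timestamped only on the 2 spike days or spread across the missing 14th-30th May window:

    | tstats count where index=your_index host="serverA" by _time span=1d

If the second search shows events with timestamps across the whole missing window, you got backfill; if everything is bunched into the 2 spike days, you likely got duplicates. One possible cause worth keeping in mind: a fresh universal forwarder install starts with an empty fishbucket (the record of how far each file has been read), so any files still on disk at reinstall time can get re-read from the beginning.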