Getting Data In

How to optimize the ingestion of data from Splunk Universal Forwarder to Splunk Indexer

sdhiren
Explorer

I have a Splunk Universal Forwarder that is forwarding a 1 GB log file to a Splunk Indexer. The problem I am facing is that ingestion is very slow (100K log entries per minute). I have tried setting

parallelIngestionPipelines = 2

on both the Indexer and the Forwarder, but to no avail.
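
For reference, a rough sketch of where that setting lives (server.conf under the [general] stanza; the path assumes a default $SPLUNK_HOME/etc/system/local layout, and the same stanza goes on both the UF and the Indexer):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2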

Below are the stats for the containers running the Indexer and the Forwarder:

CONTAINER_ID   NAME                        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
ecb272b9ca6b   tracing-splunk-1            12.15%   260.8MiB / 7.674GiB   3.32%   366MB / 1.85MB   0B / 1.01GB   239
0ac17f935889   tracing-splunkforwarder-1   0.70%    68.22MiB / 7.674GiB   0.87%   986kB / 312MB    0B / 18.2MB   65

We are running these in Docker containers.

My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize the ingestion of logs?


richgalloway
SplunkTrust

Make sure you have this in limits.conf on the UF

[thruput]
maxKBps = 0
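
The UF caps forwarding throughput by default (maxKBps = 256 in its limits.conf), so setting it to 0 removes the limit. A quick sketch of how to confirm the UF picked up the change and apply it (assumes a default $SPLUNK_HOME install):

# Show the effective [thruput] settings and which file they come from
$SPLUNK_HOME/bin/splunk btool limits list thruput --debug
# Restart the forwarder so the new limit takes effect
$SPLUNK_HOME/bin/splunk restart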

 

---
If this reply helps you, Karma would be appreciated.


sdhiren
Explorer

Thanks @richgalloway. With that change, around 5 million logs were ingested in a couple of minutes.
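
For anyone measuring the same thing, a rough sketch of a search that charts indexing throughput from the indexer's own metrics (assumes the default _internal index is searchable):

index=_internal source=*metrics.log* group=per_sourcetype_thruput
| timechart span=1m sum(kb) by series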
