I have a Splunk universal forwarder sending a 1 GB log file to a Splunk indexer. The problem I am facing is that ingestion is very slow (around 100K log entries per minute). I have tried setting
parallelIngestionPipelines = 2
on both the indexer and the forwarder, but to no avail.
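For reference, this is roughly what we tried; our understanding is that the setting belongs in the [general] stanza of server.conf on each instance, so the exact placement shown here is our assumption:

# server.conf on both the indexer and the forwarder
[general]
parallelIngestionPipelines = 2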
Below are the docker stats for the containers running the indexer and the forwarder:

CONTAINER_ID   NAME                        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
ecb272b9ca6b   tracing-splunk-1            12.15%   260.8MiB / 7.674GiB   3.32%   366MB / 1.85MB   0B / 1.01GB   239
0ac17f935889   tracing-splunkforwarder-1   0.70%    68.22MiB / 7.674GiB   0.87%   986kB / 312MB    0B / 18.2MB   65
We are running both of these in Docker containers.
My team and I are fairly new to the Splunk ecosystem. Can someone please help us optimize the ingestion of these logs?


Make sure you have this in limits.conf on the UF
[thruput]
maxKBps = 0
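The UF ships with a fairly low default thruput cap, and maxKBps = 0 removes it. Assuming the standard install paths (adjust to your image), this goes in $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder, and the UF needs a restart to pick it up. From the Docker host that would look something like:

docker exec tracing-splunkforwarder-1 /opt/splunkforwarder/bin/splunk restart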
If this reply helps you, Karma would be appreciated.
Thanks @richgalloway. With that change, around 5 million logs were ingested in a couple of minutes.
