Getting Data In

How to optimize the ingestion of data from Splunk Universal Forwarder to Splunk Indexer

sdhiren
Explorer

I have a Splunk Universal Forwarder that is forwarding a 1 GB log file to a Splunk Indexer. The problem I am facing is that the ingestion is very slow (about 100K log entries per minute). I have tried setting

parallelIngestionPipelines = 2

for both the Indexer and the Forwarder, but to no avail.
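
For context, the setting sits in server.conf on both instances. A minimal sketch, assuming the default $SPLUNK_HOME/etc/system/local/server.conf location (our exact file layering may differ):

# server.conf on both the forwarder and the indexer
# parallelIngestionPipelines belongs under the [general] stanza
[general]
parallelIngestionPipelines = 2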

Below are the stats for the containers running the Indexer and the Forwarder:

CONTAINER_ID   NAME                        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
ecb272b9ca6b   tracing-splunk-1            12.15%   260.8MiB / 7.674GiB   3.32%   366MB / 1.85MB   0B / 1.01GB   239
0ac17f935889   tracing-splunkforwarder-1   0.70%    68.22MiB / 7.674GiB   0.87%   986kB / 312MB    0B / 18.2MB   65

We are running these in Docker containers.
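
(For reference, the snapshot above is docker stats output; assuming the container names shown, a one-shot capture would look roughly like this:)

# single snapshot of CPU, memory, network, and block I/O for both containers
docker stats --no-stream tracing-splunk-1 tracing-splunkforwarder-1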

My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize the ingestion of logs?

1 Solution

richgalloway
SplunkTrust

Make sure you have this in limits.conf on the UF

[thruput]
maxKBps = 0

 

---
If this reply helps you, Karma would be appreciated.
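
A quick way to confirm which throughput limit is actually in effect on the UF is btool (a sketch, assuming the default /opt/splunkforwarder install path inside the container):

# show the effective [thruput] settings and the .conf file each value comes from
/opt/splunkforwarder/bin/splunk btool limits list thruput --debug

Restart the forwarder after editing limits.conf so the new maxKBps value takes effect.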


sdhiren
Explorer

Thanks @richgalloway. With that change, around 5 million logs were ingested in a couple of minutes.
