Getting Data In

UF versus HF processing

craigkleen
Communicator

Currently, my firewall logs (Palo Alto) are sent via syslog to a virtual Linux machine.  On that machine, I run a full version of Splunk (Heavy Forwarder 8.x) that forwards to separate indexers.

I was planning to migrate the syslog data to new Linux servers and use the Universal Forwarder instead, but I'm running into what looks like some serious performance issues.  The UF sends a big chunk of data to start, but then the indexer stops receiving from the UF.

I tried the post at https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-ParsingQueue-KB-Size/td-p/50410 to increase the size of the parsing queue, but that didn't help.
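
For context, that change usually amounts to something like the following in server.conf on the forwarder; the queue size shown here is only illustrative:

server.conf

[queue=parsingQueue]
# size is illustrative; the linked post discusses sizing
maxSize = 6MB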

I'm not quite sure what to look at next.  Maybe the stream is too much for the UF to handle?  I haven't found anything definitive on that subject.

1 Solution


scelikok
SplunkTrust

Hi @craigkleen,

Are you using the same outputs.conf and limits.conf on both servers? The UF has a default bandwidth limit of 256 KB/s. Since the HF does not have this limit, you have to override it on the UF instance.

limits.conf

[thruput]
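# 0 means no throughput limit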
maxKBps = 0
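
You can double-check which value is actually in effect, and which file it comes from, with btool on the forwarder:

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug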

 

If this reply helps you, an upvote and "Accept as Solution" are appreciated.

craigkleen
Communicator

That was the ticket.

Under ${SPLUNK_HOME}/etc/system, the limits.conf files were the same.  But on the UF, under ${SPLUNK_HOME}/etc/apps/SplunkUniversalForwarder/, the app's default limits.conf capped throughput at 256 KB/s.  So I created a local directory for that app and put the override there.
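
For anyone hitting the same thing, the override can be as small as this, assuming the same stock app path:

${SPLUNK_HOME}/etc/apps/SplunkUniversalForwarder/local/limits.conf

[thruput]
# local/ takes precedence over the app's default/ 256 KB/s cap
maxKBps = 0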

So, thanks for the pointer!


isoutamo
SplunkTrust

Hi

On the UF, are you receiving syslog via the native syslog daemon and then reading it from a file, or directly on the UF's UDP/TCP listener?

r. Ismo


craigkleen
Communicator

On both, that's the usual process: the native "rsyslog" daemon writes to a file, and the UF then reads that file.
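
For completeness, the UF side of that is just a monitor stanza in inputs.conf; the path, index, and sourcetype below are placeholders rather than our actual values:

inputs.conf

# path, index, and sourcetype are illustrative placeholders
[monitor:///var/log/panfw/panfw.log]
index = firewall
sourcetype = pan:firewall
disabled = 0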


isoutamo
SplunkTrust
And how much traffic is that (EPS and volume)? What kind of host/filesystem/IOPS? And is the HF host equal to the UF host?

craigkleen
Communicator

From a machine standpoint, the HF and UF are the same.  Both are virtual servers that are clones of each other.  The only difference is the Linux version (going from RHEL6 to RHEL8).

If I run this search:  host=HF index=_internal eps=* group=per_source_thruput source=panfwlog

The max EPS I get is right around 1,200.  

A similar search with host=UF, during the time the firewall is sending to this new server, shows EPS under 4?  Super weird.

The data is getting written to disk, and when I switch the firewall back to the old server, the UF eventually does catch up, but it's not reading like the HF does.
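
To put the two forwarders side by side, a search along these lines should work; it assumes the per_source_thruput metrics carry the monitored file path in the series field, so the series filter (and the HF/UF host names) may need adjusting for your environment:

index=_internal group=per_source_thruput (host=HF OR host=UF) series=*panfwlog*
| timechart span=5m max(eps) BY host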
