Splunk UF is installed on my NFS server, which is basically a log storage server. A log rotation daemon also runs on that server; after 24 hours it converts each file to a gzip file in the same location. It is a single NFS server, and it holds a really large amount of data.
But sometimes my UF does not forward some files' data from the NFS server to my indexer servers, so many files end up missing from my Splunk indexers.
The following parameters are the same for many of the sourcetypes in props.conf (yes, many events are really big):
TRUNCATE = 20000
MAX_EVENTS = 512
BREAK_ONLY_BEFORE = < [Set] >
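For context, this is roughly how those settings sit together in a props.conf stanza (the sourcetype name and the breaker regex below are hypothetical placeholders, not my real values):

```
[my_big_sourcetype]                  # hypothetical sourcetype name
TRUNCATE = 20000                     # max chars per line before truncation
MAX_EVENTS = 512                     # max lines merged into one event
BREAK_ONLY_BEFORE = ^\[SET\]         # placeholder regex; real event-start pattern goes here
```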
Please suggest how I can improve my UF's performance.
What do the inputs.conf entries look like?
What is the system load on your log aggregator?
How big is your Splunk env?
What network components (firewalls, load balancers, etc) are between the collector and Splunk?
In my Splunk environment I have one NFS server (for log collection) with the UF installed on it. It contains an inputs.conf file and a props.conf file. In inputs.conf we monitor some directories and forward their data to a particular index, using the sourcetypes defined in props.conf.
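A sketch of what one of those monitor stanzas might look like (the path, index, and sourcetype names here are hypothetical; the blacklist line keeps the UF from tracking rotated gzip files at all):

```
[monitor:///mnt/nfs/logs/app]        # hypothetical monitored directory
index = app_logs                     # hypothetical destination index
sourcetype = app:log                 # hypothetical sourcetype defined in props.conf
blacklist = \.gz$                    # skip the rotated gzip copies
```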
That NFS server is an on-premises server. It forwards data to 6 indexers; the indexers are EC2 instances that share the same AWS Route 53 record in a round-robin setup (so no ALB/NLB/ELB in front of the indexers). Yes, I also manage that NFS server using one Splunk master server. The indexers' cold and frozen buckets are on AWS EFS drives shared across all indexers. Apart from this there are some search head servers; yes, the SHs sit behind an ALB, then Route 53.
On the indexer servers I continuously store data from other on-premises servers, AWS servers, OpenShift servers, DB servers, and syslog servers.
You have to have good hygiene for old logs. Hundreds of co-resident logs is fine, thousands is risky, and above that you will experience a total breakdown in the UF's ability to scan through them and send updates in a timely manner. Even if the *.gz files do not match your monitor pattern, they will still cause this problem unless you MOVE THEM SOMEWHERE ELSE.
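One way to do that move is a periodic sweep (e.g. from cron) that relocates rotated gzip files out of the monitored tree. A minimal sketch, assuming GNU find/touch; the directories below are temporary stand-ins created for the demo — in a real deployment, point LOG_DIR at the monitored directory and ARCHIVE_DIR somewhere outside every monitor path:

```shell
#!/bin/sh
# Stand-in directories for the demo; replace with your real paths.
LOG_DIR=$(mktemp -d)       # stands in for the UF-monitored log directory
ARCHIVE_DIR=$(mktemp -d)   # destination outside the monitored tree

# Simulate one rotated file older than 24 hours and one active log.
touch -d '2 days ago' "$LOG_DIR/app.log.1.gz"
touch "$LOG_DIR/app.log"

# Move gzip files last modified more than a day ago, so the UF
# never has to re-scan thousands of old rotated files.
find "$LOG_DIR" -maxdepth 1 -name '*.gz' -mtime +0 \
    -exec mv {} "$ARCHIVE_DIR/" \;
```

Run under the same user as the rotation daemon so file ownership stays consistent; the active log is untouched because it does not match `*.gz`.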