Getting Data In

Indexer is slowing down

Communicator

I'm having problems with indexing a particular log source, which is slowing down. It started off strong, but throughput continues to drop hour over hour. My main concern is the log files that are starting to accumulate on the forwarder, which is using the batch stanza. Here is the content of the indexer's indexes.conf file:

[default]
maxTotalDataSizeMB = 27000000
frozenTimePeriodInSecs = 18869760000

Is there anything I can do to increase throughput for a specific source?
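
For context, the batch input on the forwarder is along these lines (the path here is a placeholder, not the actual one):

[batch:///var/log/myapp/*.log]
move_policy = sinkhole
disabled = false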

Re: Indexer is slowing down

Splunk Employee

Tuning the indexes.conf file will not speed up indexing. If you are having a problem with indexing speed, check the internal metrics as well as system resources. If you have enabled the light forwarder app, it is possible that your forwarder's throughput limit is capped at the default of 256KBps. Without complete details about the log source, it is difficult to give a more complete answer.
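
If that is the cause, the cap is set in limits.conf on the forwarder. A minimal sketch of raising it (the value shown is only an example; setting it to 0 removes the limit):

[thruput]
maxKBps = 1024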

Re: Indexer is slowing down

Communicator

I'm using a regular forwarder. This log source seems to be the only one on the indexer that is slowing down: throughput started off high but continues to dwindle. I've run some searches against the internal metrics, looking at throughput and indexing speed. Any other recommended searches would be helpful.
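
For example, this is the kind of search I have been running against the internal metrics (the series value is a placeholder for the actual source path):

index=_internal source=*metrics.log group=per_source_thruput series="*my_log*"
| timechart span=30m sum(kb) AS indexed_kb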

Re: Indexer is slowing down

Communicator

All the internal metrics searches I've run seem to confirm that I have a problem.

Re: Indexer is slowing down

Super Champion

Could you add some additional info about the specific metrics you are looking at?

Re: Indexer is slowing down

Super Champion

Please add to your question: (1) the version of your Splunk indexer, (2) the version of your forwarder, and (3) why you suspect this is an indexing performance issue rather than a monitor (or batch) input performance issue.

Re: Indexer is slowing down

Communicator

What type of input is this? We have noticed a slowdown with monitor inputs when hundreds (even thousands) of files are being monitored. Our solution was to remove some of the monitored files: they were old rotated log files, and once Splunk has indexed them, we don't really care about the source files anymore.
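
If it is a monitor input, one sketch of that cleanup is a blacklist on the monitor stanza so rotated files are never picked up (the path and pattern here are examples):

[monitor:///var/log/myapp]
blacklist = \.(gz|\d+)$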
