Getting Data In

Some of my indexes have stopped indexing...

balbano
Contributor

For some reason, it looks like 2-3 of my indexes have stopped indexing. The monitor inputs for those indexes point to directories storing syslog-ng files, and all of my indexes that are currently indexing are set up the exact same way. I tried looking through my indexer's Splunk logs but can't find anything helpful. Has this ever happened to anyone before? What are the typical reasons an indexer would stop indexing? Networking is not an issue, since the indexer is indexing local files. The Splunk service is running fine, my other indexes are indexing just fine, and the Splunk ports are open... so what am I missing here? I am super stumped on this one.
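For context, a monitor input of the kind described here (watching a syslog-ng output directory) typically looks like this in `inputs.conf`; the path, index name, and sourcetype below are hypothetical examples, not the poster's actual config:

```ini
# Hypothetical monitor stanza for a syslog-ng output directory
[monitor:///var/log/syslog-ng/hosts]
index = syslog_hosts
sourcetype = syslog
disabled = false
```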

Appreciate a point in the right direction.

Let me know if you need more details to help arrive at the cause of the issue.

Thanks guys.

Brian

1 Solution

balbano
Contributor

Looks like we fixed the problem. For some reason, Splunk did not like the symlinks that were created after we temporarily moved the log data to NFS. Even though we kept the symlinks consistent with what was in the inputs.conf file, those indexes were not indexing, while the other indexes that were set up the exact same way were indexing fine.

We created a new index, pointed the data input at the new index, and it appears to be working.

I have opened a support ticket with Splunk to let them know, as it could be a possible bug.
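For anyone hitting the same symptom: Splunk's monitor input has a `followSymlink` setting in `inputs.conf` that controls whether symlinked files and directories are traversed (it defaults to true). A sketch, with a hypothetical path and index name:

```ini
# Hypothetical stanza; if files behind symlinks are not being picked up,
# confirm followSymlink has not been set to false somewhere in the
# configuration layering (it defaults to true).
[monitor:///var/log/syslog-ng/hosts]
index = syslog_hosts
followSymlink = true
```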

Thanks for all of the help.

Brian



gkanapathy
Splunk Employee

Most commonly, it's just because the free space on disk dropped below the limit (default of 2000 MB) on one of the index volumes or on the $SPLUNK_HOME/var/run volume. However, it's unusual for only some of the indexes to stop; normally it will be all of them. I'm also a little unclear whether you have multiple indexes on one indexer, or multiple indexers, and which exact situation you're running into.
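The free-space check above is easy to verify from the shell. A minimal sketch, assuming the default 2000 MB threshold; point `df` at your actual index volume (e.g. $SPLUNK_HOME/var/lib/splunk) rather than the current directory used here for illustration:

```shell
# Free megabytes on the volume holding the current directory.
# On a real indexer, replace "." with $SPLUNK_HOME/var/lib/splunk
# and also check $SPLUNK_HOME/var/run.
FREE_MB=$(df -Pm . | awk 'NR==2 {print $4}')
MIN_FREE_MB=2000   # Splunk's default minimum free space in MB

if [ "$FREE_MB" -lt "$MIN_FREE_MB" ]; then
    echo "free space ${FREE_MB} MB is below ${MIN_FREE_MB} MB: indexing will pause"
else
    echo "free space ${FREE_MB} MB is above the ${MIN_FREE_MB} MB threshold"
fi
```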

balbano
Contributor

Yes, we have 2 indexers, both with multiple indexes, mirrored for load-balancing purposes. I have reason to believe the issue may somehow lie in the fact that we temporarily moved the log data off to NFS so that we could rebuild the disks as RAID-10. It's difficult to tell whether the indexes that stopped indexing have somehow become corrupt, since the logs haven't helped much, so I am going to create a new index and point the log data at it to see if that works. I'll let you know how it turns out.

Let me know if you want me to try other approaches. Thanks, gkanapathy.
