Getting Data In

"Indexer was started dirty"

krussell101
Path Finder

I have no clue what this error means.

The entire error in the splunkd.log is:

Indexer was started dirty, searches may not be accurate. Consider restarting Splunk and accepting the recovery request.

When I stop and restart splunk I'm not offered a recovery option.

I am getting these on virtually every server where I'm running Splunk: heavy forwarders and the indexer itself. The only exceptions are the two servers where I am running universal forwarders.

What does it mean and how do I clear it?

Thanks!!!


Drainy
Champion

Have a read of:
http://docs.splunk.com/Documentation/Splunk/latest/admin/HowSplunkstoresindexes#Troubleshoot_your_bu...

It sounds like you need to do a complete fsck of your buckets. This can take a few hours depending on how big they are, so set aside some time for it. It also sounds like Splunk isn't being shut down cleanly, or the servers are crashing.
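A minimal sketch of that workflow, using the flag style the original poster later confirmed (`--repair --all`). Exact `fsck` flags differ between Splunk versions, so check `splunk fsck --help` on your installation first:

```shell
# Stop Splunk cleanly first -- don't repair buckets while splunkd is running.
$SPLUNK_HOME/bin/splunk stop

# Repair all indexes (flag style as the poster ran it; newer releases
# use a different syntax -- confirm with: splunk fsck --help).
$SPLUNK_HOME/bin/splunk fsck --repair --all

# Restart, then confirm the dirty-start warning no longer appears.
$SPLUNK_HOME/bin/splunk start
grep "started dirty" $SPLUNK_HOME/var/log/splunk/splunkd.log
```

Expect the repair pass to run for a while on large indexes; the grep should come back empty once the buckets are clean.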


krussell101
Path Finder

Perfect! The page you reference suggests running splunk fsck with the rebuild option.

I ran it with --repair --all on each server and that did the trick.

On several of the servers, there were no errors when Splunk was started (before running fsck), so the only evidence of a problem was the log entry.

Interesting.

At any rate, thanks very much for taking the time to help.

Much appreciated.
