I just moved all logs from index main to index OS1, but every now and then a few rogue events still find their way into main instead of OS1. In the last 17 hours, 13 events from different servers have landed in main. All clients are pointed at the new index.
Could this be a sign that I need another index or to beef up the index that I have? I am running Splunk 6.2.3 on a 6CPU 12GB virtual machine with 1 NIC.
You have a timestamp problem: some events were indexed in the past with timestamps in the future. As those future timestamps roll around to the present, the events start showing up in your searches of main, even though they were indexed long ago, before you made the OS1 change. You can check this with a search like this:

... | eval lagSecs = _indextime - _time | stats min(lagSecs) avg(lagSecs) by sourcetype, host, index

You are going to see some negative numbers, which is IMPOSSIBLE. A negative lag means events were indexed before they happened. That is your problem. (Including `min(lagSecs)` helps because a handful of bad events can hide inside an otherwise healthy average.)
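Outside of Splunk, the same lag check can be sketched like this. The sample events and field values below are made up for illustration; the point is just the `_indextime - _time` comparison, where a negative result flags an event whose parsed timestamp lies in the future relative to when it was indexed:

```python
# Hypothetical sample events mimicking Splunk's _time (parsed event
# timestamp) and _indextime (when the indexer wrote the event).
events = [
    {"host": "web01", "index": "main", "_time": 1000, "_indextime": 1005},
    {"host": "web01", "index": "main", "_time": 2000, "_indextime": 1100},  # future timestamp
    {"host": "db01",  "index": "OS1",  "_time": 1500, "_indextime": 1502},
]

def lag_secs(event):
    """Equivalent of: eval lagSecs = _indextime - _time."""
    return event["_indextime"] - event["_time"]

# Events with a negative lag were "indexed before they happened":
# their timestamp was misparsed into the future at index time.
rogues = [e for e in events if lag_secs(e) < 0]
for e in rogues:
    print(e["host"], e["index"], lag_secs(e))
```

In this toy data the second event has a lag of -900 seconds, so it would keep surfacing in searches of main for the next 900 seconds even though nothing new is being sent there.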