I've encountered an issue while working on a configuration for a Splunk deployment. I was creating a stanza in the inputs.conf file within an app that is pushed to multiple clients via the deployment server. The goal was to collect specific data from those clients, but the data wasn't coming in as expected.
While troubleshooting, I made several changes to the stanza, including tweaking key values. At one point I changed the sourcetype in the stanza. Unfortunately, after making this change, all the events that had already been indexed seemed to vanish.
I'm looking for guidance on how to recover the missing events or if there’s any way to prevent this in the future when modifying the source type in inputs.conf. Any insights or suggestions on how to address this would be greatly appreciated!
Thank you in advance for your help!
Did you make any changes to your indexes.conf? Is it possible something else could have changed in the system?
Changing the configuration of your inputs.conf cannot result in existing data being removed.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
I only edited the inputs.conf in a specific app
Yup. That shouldn't have had anything to do with already-indexed data - inputs.conf lives on the collection side, "the other end" of Splunk from the data already stored in the indexes.
There is also another possibility - especially if more people are involved in your environment. While the immediate change might have been in one place (inputs.conf), there could have been other changes made earlier in the config files that were never committed to the runtime configuration. When you restarted your Splunk instance, the new config file versions were read and applied.
Anyway, if you're on a fairly modern Splunk version, you can check the _configtracker index to see what changes were made to your environment around the time you edited the inputs.conf.
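For example, a simple keyword search like the one below should surface recent config changes (widen the keyword or the time range if nothing shows up - the exact fields recorded can vary by version):
index=_configtracker "inputs.conf" earliest=-7d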
In that case, the existing data should not have been lost!
Assuming you are using the same login with the same permissions, there shouldn't be an issue with RBAC/permissions.
If you didn't make changes to indexes.conf then the index should still exist. You mentioned the retention is set to 6 months - I assume you have been getting data in fairly recently?
Are you seeing data in other indexes from other inputs? Can you see your forwarders sending their _internal logs?
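As a quick sanity check, a search like this over the last hour will show which hosts are still sending internal logs (assuming your role can search _internal):
index=_internal earliest=-1h | stats count latest(_time) as last_event by host
If your forwarders appear there with recent events, the forwarding pipeline itself is healthy.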
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Start by searching the index without any sourcetype filter, e.g. index=<your_index>. This will show all events in the index, regardless of sourcetype. Look for your "missing" events and note their sourcetype. It's probably the original one you had before the change.
Then search explicitly for the old and new sourcetypes:
index=<your_index> sourcetype=<old_sourcetype>
index=<your_index> sourcetype=<new_sourcetype>
This will confirm whether the old events are still there and whether new events are coming in under the updated sourcetype.
Preventing Future Issues
Before pushing changes via the deployment server, test the stanza in a non-production Splunk instance. Simulate the data input and verify the results.
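As a purely illustrative sketch - the path, index and sourcetype below are made-up placeholders, not your actual configuration - a monitor stanza you might test in a sandbox could look like this:
[monitor:///var/log/myapp/app.log]
index = my_index
sourcetype = myapp:events
disabled = false
Once you confirm that events land in the sandbox with the expected sourcetype, push the same stanza through the deployment server.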
I have searched for the events using both the new and old sourcetypes over an All Time date range, to no avail. I am positive I didn't delete anything manually.
What are your index settings? It could just be that the data retention period expired, perhaps there was too much data, or it was too old, causing the oldest events to be pushed out of the indexes.
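If you're not sure what the retention settings actually are, a REST search like this (run from the search head, assuming your role can query the REST endpoints) will show them:
| rest /services/data/indexes | search title=<your_index> | table title frozenTimePeriodInSecs maxTotalDataSizeMB currentDBSizeMB minTime maxTime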
"There was too much data, or it was too old, causing the oldest events to be pushed out of the indexes."
Regarding this line, how is the amount of data relevant, and does it matter if the data in the event log itself is old when it was only indexed one week ago? Does this mean that some event logs could be overwritten?
Data in Splunk is stored in so-called buckets, and each bucket can be in one of four states. Initially it's a hot bucket. Then it gets rolled to warm. Older buckets then get rolled to cold (possibly on a different storage volume). And finally, when the index or volume reaches its size limit or the data reaches its defined age limit, the bucket is rolled to frozen - by default that means it's simply deleted, but it can also be moved to yet another storage location completely outside of Splunk's "jurisdiction" (from your Splunk's point of view that data is deleted, but it can be retained and manually "thawed" later).
So yes, as your data ages in Splunk it may reach the point at which it is deleted.
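If you want to see this for your own index, dbinspect lists the buckets and their states; something along these lines (a rough sketch - adjust the fields to taste) shows how old the data in each state is:
| dbinspect index=<your_index> | stats count min(startEpoch) as oldest max(endEpoch) as newest by state | convert ctime(oldest) ctime(newest)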
Something else must have happened then. Splunk on its own only deletes data in two cases:
1) You use the delete command (even then the events are not physically removed from the index files, just marked as unsearchable, but the net effect is the same - you can't search for them).
2) The events are rolled to frozen due to the data retention policy.
And generally, any change on forwarders will not cause changes on indexers and/or search heads. They are separate components, so if you only pushed the configs to forwarders, that should not cause the "disappearance" of your events.
Maybe other changes were introduced around the same time. Most importantly, do you have permissions for the index?
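You can verify that from the search head with standard REST endpoints (assuming your role is allowed to query them):
| rest /services/authentication/current-context | table username roles
| rest /services/authorization/roles | table title srchIndexesAllowed srchIndexesDefault
If the indexes allowed for your role don't include the one in question, the events may still exist but you won't see them.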
I am pretty sure that it won't roll to the frozen state yet, as it was only indexed a week ago and the retention policy is over 6 months.
I have never even used the delete command before. And yes, I have permissions.