Hello all, I need to preface this with the disclaimer that I am a relative Splunk neophyte, so if you do choose to help, please keep your answer as knuckle-dragger / mouth-breather proof as possible.
Issue:
An individual machine with a UF instance appears to have sent only security logs from around April 2022 onward for ingestion, despite the fact that:
(a) the Splunk instance on this local machine has been up and running since 2019
(b) the Splunk ES architecture has been in place and running since 2016, although none of the people who implemented it remain, and there is no usable documentation on exactly how or why certain configuration choices were made
To comply with data retention requirements, we need to ensure that all local security logs from 2019 to the present are ingested, confirmed to be stored, and then ideally deleted from the local machine to save storage space.
(a) The logs that appear not to have been ingested have already been identified and moved to a location separate from the current security log.
Question:
What is the most efficient and accurate way of ensuring these logs are actually ingested in a distributed environment? Looking through the documentation, various community threads, and the data ingestion options (on our Deployment Server, License Master, various Search Heads, Heavy Forwarders, Indexers, etc.), I can't find anything that deals specifically with the situation I seem to be facing (an existing deployment, with selective file ingestion from one specific instance) apart from physically going to the machine, which can be... difficult.
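In case it helps frame the question, the only approach I could come up with on my own was pushing a small app from the Deployment Server to just this one UF, containing a monitor stanza pointed at the directory where the old logs now sit, roughly like the sketch below. The path, index, and sourcetype are placeholders rather than our real values, and I honestly have no idea whether a plain monitor stanza will even handle the archived log format, so please correct me if this is the wrong direction entirely:

# inputs.conf in a small app pushed from the Deployment Server to this one UF
# (path, index, and sourcetype are placeholders, not our real values)
[monitor://D:\ArchivedSecurityLogs]
disabled = 0
# send to whatever index the current security log already goes to
index = wineventlog
sourcetype = old_security_log

Scoping that app to only this host via serverclass.conf is what I meant by trying to avoid a trip to the machine, but if there is a better or more standard way to backfill old files like this, I am all ears.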
Any help / information / redirection would be greatly appreciated.