Hello all, I need to preface this with the disclaimer that I am a relative Splunk neophyte, so if you do choose to help, please keep it as knuckle-dragging, mouth-breather-proof as possible...
Issue:
An individual machine with a UF instance appears to have sent only security logs from around April 2022 onward for ingestion, despite the fact that:
(a) the Splunk instance on this local machine has been up and running since 2019;
(b) the Splunk ES architecture has been in place and running since 2016, although none of those who implemented it remain, and there is no usable documentation on exactly how or why certain configuration choices were made.
To comply with data retention requirements, we need to ensure that all previous local security logs from 2019 until now are ingested, confirmed to be stored, and then ideally deleted from the local machine to save storage space.
(a) The logs that appear not to have been ingested have been identified and moved to a separate location from the current security log.
Question:
What is the most efficient and accurate way of ensuring these logs are actually ingested in a distributed environment? Looking through the documentation, various community threads, and the data ingestion options (on our Deployment Server, License Master, various Search Heads, Heavy Forwarders, Indexers, etc.), I can't find anything that deals specifically with the situation I seem to be facing (an existing deployment, selective file ingestion from a specific instance) apart from physically going to the machine, which can be... difficult.
Any help / information / redirection would be greatly appreciated.
That should give you all your event logs reingested (assuming that you haven't configured the inputs to pull only current logs).
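The configuration this refers to isn't shown above, but on a Windows UF the "only current logs" behaviour is controlled by the Windows event log input settings in inputs.conf. A minimal sketch, assuming a standard Security event log input deployed from the Deployment Server (the stanza below is illustrative, not pulled from your environment):

[WinEventLog://Security]
disabled = 0
# read the channel from the beginning rather than only events generated after startup
start_from = oldest
current_only = 0

Even with these settings the UF skips events it believes it has already read; on recent UF versions the per-channel checkpoints live under $SPLUNK_HOME\var\lib\splunk\modinputs\WinEventLog\ on the forwarder, so a full re-read generally also means clearing the relevant checkpoint while the UF is stopped. Worth validating against a test index first, since re-reading the whole channel can generate a large volume of duplicates, and note that this input only reads the live channel; the archived files you moved aside would need to be handled separately.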
Hi @bamflpn18,
if you need to understand your architecture, you could check whether the Monitoring Console is configured.
To check the data, you could run a simple search:
| metasearch index=*
| stats values(index) AS index values(host) AS host count BY sourcetype
This way you get an overview of all the data flows that are active in your network, which hosts they come from, and where the logs are stored.
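Building on that, if it helps to confirm whether anything older than April 2022 ever arrived from the machine in question, a follow-up search along these lines (the host value is a placeholder for your UF's host name) shows the earliest and latest indexed events per index and sourcetype:

| tstats min(_time) AS earliest max(_time) AS latest count WHERE index=* host=<your_uf_host> BY index sourcetype
| convert ctime(earliest) ctime(latest)

If "earliest" never goes back before roughly April 2022 for the security sourcetype, the gap is on the collection side rather than a retention problem on the indexers.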
In addition, ES is a very complex app that has evolved a lot in recent years, as have the data ingestion apps (TAs).
For all these reasons, I think this job requires at least a Splunk Architect; in the Community you can find some ideas, but you need solid Splunk architecture and data ingestion knowledge.
Ciao.
Giuseppe
Sir:
I apologize for the delayed response, but....life.
re: Monitoring Console / Index to Host by Sourcetype
I had pulled some of our basic information from a similar thread, as well as pulling up the MC to try to view / configure the forwarders / indexes / data sources, etc. One of the things we've found across the environment is individually (and often mis-) configured .conf files and general settings across all tiers of instances, resulting in what appears to be some things going THIS way, others THAT way, and a fairly wide spectrum of indexes with sporadic ingestion.
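If it's useful to anyone following along, btool shows which settings actually win on a given instance when the on-disk .conf files are suspect (run it from the Splunk installation on that instance; the path below assumes a default install):

$SPLUNK_HOME/bin/splunk btool inputs list --debug

The --debug flag prints the file each effective setting comes from, which makes it easier to see where the conflicting or layered configurations live before changing anything.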
We're working to get hold of one of the subject matter experts via Splunk OnDemand, but due to our personnel turnover we're still working through transferring licenses / accounts. Unfortunately, this specific issue requires resolution before that process is expected to complete.
I appreciate the quick response and information.
That should give you all your event logs reingested (assuming that you haven't configured the inputs to pull only current logs).
Sir:
I am sorry for the delayed response, but...life.
That is precisely what I was looking for and makes sense even to me. I have not had an opportunity to implement / validate it on our setup, but I'm optimistic for the first time in a while.
The information and redirection are much appreciated.