Getting Data In

What is the best practice for monitoring a file directly on the indexer machine(s)?

hettervik
Builder

I need to monitor a file directly on the indexer. I know I can just define an inputs.conf on the indexer itself and read the file. Later on, if I upgrade to an indexer cluster, could this create problems? Would the data from the file still be distributed across the different indexers when read like this (as opposed to receiving data on port 9997 from a UF)?
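For context, a monitor input on the indexer itself would just be a stanza like the following in its local inputs.conf (the path, index, and sourcetype here are hypothetical examples):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the indexer
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = false
```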

It feels kind of like a hack to push inputs configuration from the Cluster Master, but I guess the alternative would be to install a UF on the machine alongside the Splunk Enterprise indexer instance, so the input would be load balanced as well, though I think that solution would be a bit of overkill.

What is the best practice for doing this?

1 Solution

gjanders
SplunkTrust
SplunkTrust

If the inputs.conf can run on every machine in the cluster, then deploy it via the master-apps/... directories.
If the inputs.conf needs to run on a single indexer, then you could use $SPLUNK_HOME/etc/system/local/inputs.conf (obviously not cluster aware), or you could install a forwarder to send the data to the indexer.

If you have the scenario where it can only run on a single indexer, I'd use system/local/inputs.conf before installing a UF just for that purpose.
If you have the scenario where the input runs on every cluster member, then clearly it should go in the inputs.conf of one of the apps in the master-apps directory.
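For the cluster-wide case, a minimal sketch of the master-apps layout (the app name, path, and index here are hypothetical):

```ini
# On the cluster master, e.g.:
# $SPLUNK_HOME/etc/master-apps/my_inputs_app/local/inputs.conf
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
```

The bundle is then pushed to all peers with `splunk apply cluster-bundle`, so every indexer gets the same input.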


inventsekar
Ultra Champion

We may never need to install a UF on the indexer; with config files alone we can achieve the required monitoring.

(Any monitoring solution should do its own self-monitoring as well. Reminds me of the quote Dan Brown used: who will guard the guards?)

Hot buckets and replication:

Hot buckets are replicated too.
(The replication is not per-event but a certain slice of data.) See http://docs.splunk.com/Documentation/Splunk/6.0/Indexer/Howclusteredindexingworks for more information.
(From another post ... https://answers.splunk.com/answers/121862/hot-buckets-replications.html )



hettervik
Builder

Do you know if cluster replication of the logs/buckets still happens if the data is onboarded this way, directly from inputs.conf on the indexer?

I'm assuming the data is only indexed on the indexer reading it, but perhaps the bucket is replicated after it's closed and rolled from hot to warm. I can't seem to remember exactly how bucket replication works.


gjanders
SplunkTrust
SplunkTrust

As per inventsekar's post, the buckets will be replicated while still hot, assuming repFactor = auto is set on the relevant index!
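A minimal sketch of the indexes.conf setting referenced above (the index name is hypothetical); on a clustered peer this stanza would normally be pushed from the master-apps directory:

```ini
# indexes.conf -- enable replication for this index
[myapp]
homePath   = $SPLUNK_DB/myapp/db
coldPath   = $SPLUNK_DB/myapp/colddb
thawedPath = $SPLUNK_DB/myapp/thaweddb
# repFactor = auto makes the peer replicate this index's buckets
# according to the cluster's replication factor; the default (0)
# means the index is not replicated at all.
repFactor = auto
```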
