Addon Unix - what about the target index?

corti77
Communicator

Hi,

I am trying to monitor our Unix boxes (RedHat) without success.

I deployed the universal forwarder following the instructions (https://docs.splunk.com/Documentation/Forwarder/8.2.4/Forwarder/Configuretheuniversalforwarder).

I installed the RPM and registered the deployment server and the receiving indexer:

./splunk add forward-server <host name or ip address>:<listening port>
./splunk set deploy-poll <host name or ip address>:<management port>

I can see the new Linux box correctly in Splunk Web under Forwarder Management.
Then I installed the Add-on for Unix, following the instructions as well:

https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Enabledataandscriptedinputs

I copied the add-on to the folder C:\Program Files\Splunk\etc\deployment-apps and deployed it to the Linux box using Splunk Web (I created the server class and assigned the client and the TA_nix).

Then I logged into the Linux box and enabled the data inputs from the command line:
https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Enabledataandscriptedinputs

 

 

./splunk cmd sh $SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/setup.sh --enable-all

 

 

I restarted the splunk forwarder as indicated in the instructions.

Here is where I get lost...
I don't see any mention of which index the events should go to. I don't see any new index created by the add-on, so I created the index "os" myself. Is this correct?

I also added index=os to all stanzas in local/inputs.conf, and the events started to appear in the index "os".
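For illustration, the enabled stanzas in local/inputs.conf end up looking roughly like this (cpu.sh and vmstat.sh are scripted inputs shipped with the add-on; the interval values and the "os" index name are choices of our own, not defaults mandated by the TA):

```ini
# Splunk_TA_nix/local/inputs.conf
# "index = os" is our own choice - the add-on does not create or require this index
[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
index = os
disabled = 0

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
index = os
disabled = 0
```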

Is this the way to do it? Are there other actions that I missed?

Thanks a lot

1 Solution

gcusello
SplunkTrust

Hi @corti77,

No, you have to do two things:

  • this TA doesn't create the index; you have to create it manually on the indexers. This is by design, because you could have clustered or non-clustered indexers;
  • you have to add "index=os" to all the enabled stanzas of inputs.conf:
    • on the Search Head you could also add it via the GUI,
    • but on the clients you have to add it manually to all stanzas.

The link to the documentation is https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/About
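To make the first point concrete, here is a minimal indexes.conf sketch for a standalone (non-clustered) indexer; the three path attributes are required for any new index, and the paths shown follow the standard $SPLUNK_DB layout:

```ini
# indexes.conf on the indexer (for example in $SPLUNK_HOME/etc/system/local/)
# standalone example; on an indexer cluster, push this from the cluster manager instead
[os]
homePath   = $SPLUNK_DB/os/db
coldPath   = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
```

A restart of the indexer (or a rolling restart in a cluster) makes the new index available.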

Ciao.

Giuseppe


gcusello
SplunkTrust

Hi @corti77,

one question:

Did you deploy the TA_nix add-on manually, or using the Deployment Server?

If you used the DS, you have to manually edit the inputs.conf file of the TA on the DS, changing the parameter "disabled=1" to "disabled=0" in all the stanzas you need for your monitoring; then you can deploy the app as usual.
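For example, enabling one of the add-on's monitor stanzas by overriding it in the TA's local folder on the DS could look like this (the /var/log stanza is one of those shipped disabled by default; adjust this to the stanzas you actually need):

```ini
# Splunk_TA_nix/local/inputs.conf on the deployment server
# local settings override default/inputs.conf; stanza names must match exactly
[monitor:///var/log]
disabled = 0
```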

If you deployed it manually, check that the inputs.conf file of the TA has "disabled=0" in the correct stanzas.

About the index: you have to put "index=os" in each stanza of inputs.conf and deploy it as described above.

Ciao.

Giuseppe


corti77
Communicator

Hi @gcusello,

Thanks for your reply. What you explained is exactly what I did: I created inputs.conf in the local folder with index=os in all stanzas and deployed the app to the target computers.

My question was about which index to use. I was expecting that the add-on might create the index, but nothing is mentioned in the documentation. Where did you find index=os? Could you share a link to the documentation?

Thanks,

Jose



PickleRick
SplunkTrust

The apps you deploy onto forwarders don't touch the indexers, so there is no way for them to "create indexes" or do anything of the sort.

That's why more complicated apps can have settings applicable to both UFs and HFs/indexers: you deploy them on both of those classes of Splunk components, and only the settings appropriate for a particular layer are in force where the app is deployed.

But we're digressing.

If you have your inputs.conf defined within the app with an index=os entry in the appropriate stanzas, and have the app deployed onto the forwarders, here's what happens:

1) The forwarder starts reading/receiving events on the configured inputs.

2) The forwarder splits the input stream into single events, applies the metadata you provided (in your case, including the index=os entry), and sends each event upstream (to the configured output).

3) The indexer (I assume you have a simple environment without any intermediate heavy forwarders) receives the data, applies any transformations it has defined for the given source/sourcetype/host (I assume in your case there aren't any - you haven't installed additional apps addressing this type of data or configured the indexers in any additional way), and tries to index it according to the metadata it was received with.

4) In your case, since the events are marked with metadata index=os, the indexer tries to put the data in the "os" index. Since you didn't create the index, depending on your configuration, the data will either get discarded or will be indexed into a "last resort" index which is meant for such miscategorized data.

You should see errors in the _internal index where the indexer complains about receiving events for a non-existent index.

Long story short - if you tell splunk to send events to an index, create it first 😉
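Incidentally, the "last resort" index mentioned in point 4 is controlled by the lastChanceIndex setting in indexes.conf on the indexer (available in recent Splunk versions); if it is unset, events addressed to a non-existent index are simply dropped. A minimal sketch, with "main" as the catch-all chosen here purely as an example:

```ini
# indexes.conf on the indexer
[default]
# catch events destined for non-existent indexes instead of dropping them
lastChanceIndex = main
```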
