Hello.
I've been working on a case with Splunk support for a week or two that involves the receiving port on one or more indexers getting plugged up and temporarily not accepting new events from the universal forwarders sending to it.
I won't go into all the details of that case, but I need to collect additional netstat information for the very intermittent times this happens. I have some other non-Splunk-y ways I could do this, but processing the results would be easiest if they were in Splunk. Since this is intermittent it would be far more data than I'd need, but whatever might be easiest...
If I were to use the Splunk App for *nix, and its netstat script, to gather this information on indexers, what happens when this receiver port issue occurs? Does the output from a generating script somehow depend on the receiver port (9997), or in the case of a local event source on an indexer, is this handled internally? If it depends on the receiver port somehow, then I definitely need to go with another approach.
Thanks
It doesn't. You can test this by setting up a dev box with no receiving port and you will see scripted inputs still get ingested.
Now that said, if I were you I would probably keep the troubleshooting stuff separate from Splunk in case the bug also affects your data collection.
Maybe just run the Unix TA script via cron and have it write to a file that you ingest later? Would be an easy change.
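Something like the following, as a rough sketch. The output path, the timestamp header format, and the `ss` fallback are all my own assumptions here, not anything the Unix TA does out of the box:

```shell
#!/bin/sh
# Hypothetical capture script: append a timestamped netstat snapshot
# to a local file, for later ingestion or review.
# The output path is an assumption -- point it wherever suits you.
OUT=./netstat_capture.log

{
  # ISO-8601 UTC header so each snapshot is easy to split on later
  echo "=== $(date -u +%Y-%m-%dT%H:%M:%SZ) ==="
  # Fall back to ss on boxes without netstat; note if neither exists
  netstat -an 2>/dev/null || ss -tan 2>/dev/null || echo "netstat/ss unavailable"
} >> "$OUT"
```

Then a crontab entry to run it every minute, e.g. `* * * * * /usr/local/bin/netstat_capture.sh`, and you can batch-ingest the file whenever the issue reproduces.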
Good luck
Thanks for the info.
Great suggestion. I'll give that a shot.
Thanks