I am using REST API Modular Input to sample Twitter per http://blogs.splunk.com/2014/07/03/splunking-social-media-tracking-tweets/ on a Standalone Search Head. It all works great.
How would I configure this on a search head cluster? Would I add inputs.conf on each search head in the cluster? And how would we ensure that each head doesn't run the REST input separately, while still making the results available across all heads?
The inputs.conf has the stanza
[rest://TwitterFeed]
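For reference, a minimal sketch of what that stanza looks like in inputs.conf — the parameter names below follow the REST API Modular Input app's conventions, but the exact keys and values (endpoint URL, auth settings, polling interval) are placeholders, not a tested config:

```ini
# Sketch only -- check the app's README for the exact supported parameters.
[rest://TwitterFeed]
endpoint = https://api.twitter.com/1.1/statuses/filter.json
http_method = GET
auth_type = oauth1
index = twitter
sourcetype = tweets
polling_interval = 60
```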
In a distributed architecture you should install the App on a Forwarder and edit inputs.conf directly rather than using the config UI.
Is there a way to have high availability for the REST Modular Input(without resorting to a load balancer)? What if the standalone search head/forwarder goes down?
So I tried this on a forwarder but got odd Python errors. Does the REST API Modular Input have to run on a full Splunk install, or will a Splunk forwarder install suffice?
"...Unlike full Splunk Enterprise, the universal forwarder does not include a bundled version of Python...."
UFs need a Python runtime on the OS.
Heavy Forwarders do not, since they ship with full Splunk Enterprise's bundled Python.
Thanks. The problem turned out to be that the OS default Python was 2.6. Once I pointed /usr/bin/python at a 2.7 install, it worked.
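For anyone hitting the same errors on a UF: a quick sanity check of the interpreter the modular input will pick up. This is just a sketch — the 2.7 minimum matches what worked here, but confirm the actual requirement in the app's documentation:

```python
import sys

# Minimum version assumed from this thread (the app failed on 2.6, worked on 2.7).
REQUIRED = (2, 7)

actual = sys.version_info[:2]
if actual < REQUIRED:
    raise SystemExit(
        "Python %d.%d found, but %d.%d or newer is needed" % (actual + REQUIRED)
    )
print("OK: Python %d.%d is new enough" % actual)
```

Running this with the interpreter at /usr/bin/python tells you whether the OS default is the culprit before digging into the modular input's logs.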