Deployment Architecture

Where do I deploy scripted inputs in a Splunk 6.2.4 environment with both indexer and search head clustering?

dturner83
Path Finder

We've got a 3-indexer, 3-search-head setup on 6.2.4 using indexer clustering, search head clustering, and a deployer to configure the search heads.

I've got a scripted input that reaches out to a third-party API and returns that data to be indexed.

My question is: where should this go? If I deploy it via the deployer to the search heads, each search head indexes the data locally in its main index. If I put it on the indexer, it doesn't appear to run and doesn't put any data into an index.
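For context, a scripted input of this kind is usually just a script that polls the API and writes events to stdout, which Splunk then indexes. Here is a minimal sketch; the URL and field names are hypothetical placeholders, and note that Splunk 6.2's bundled interpreter is Python 2, so the imports would need adjusting there:

```python
#!/usr/bin/env python3
"""Minimal sketch of a scripted input: poll a third-party API and write
one event per record to stdout, where Splunk picks it up for indexing.
API_URL and the record fields are hypothetical placeholders."""
import json
import sys
from urllib.request import urlopen

API_URL = "https://api.example.com/v1/metrics"  # placeholder

def format_event(record):
    """Render one API record as a sorted key=value event line."""
    return " ".join("%s=%s" % (k, record[k]) for k in sorted(record))

def run(stream):
    """Parse a JSON array of records and emit one event per line."""
    for record in json.load(stream):
        sys.stdout.write(format_event(record) + "\n")

# In the real script, Splunk runs this file on the inputs.conf interval:
#   run(urlopen(API_URL, timeout=30))
```

Because the script only writes to stdout, *where* it runs determines where the data lands, which is exactly the deployment question here.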

1 Solution

bmacias84
Champion

I would never recommend putting a script that collects data on any cluster member, search head or indexer. Instead, I would recommend setting up a Universal or Heavy Forwarder for all third-party API inputs or inputs from a remote machine. This prevents accidentally indexing data multiple times. At that point you could also use the Deployment Server.


frobinson_splunk
Splunk Employee

Hi there, @dturner83,

After consulting with my colleagues, we have a few suggestions for you.

--You should be able to use your script on the indexer. Provided that the script is returning results, there may be an issue with how those results are being handled as inputs on the indexer, so you might want to double-check the settings in inputs.conf.

Here are some documentation resources about setting up inputs and configuring inputs.conf:
http://docs.splunk.com/Documentation/Splunk/6.2.4/Data/Setupcustominputs
http://docs.splunk.com/Documentation/Splunk/6.2.4/admin/Inputsconf
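As a starting point for that double-check, a scripted input stanza in inputs.conf typically looks like the following. This is a hypothetical example; the app name, script path, interval, index, and sourcetype are placeholders to adapt:

```ini
# $SPLUNK_HOME/etc/apps/my_api_app/local/inputs.conf  (hypothetical app)
[script://$SPLUNK_HOME/etc/apps/my_api_app/bin/poll_api.py]
interval = 600          # run every 10 minutes (seconds)
index = main            # target index must exist on the indexing tier
sourcetype = api:json
disabled = 0
```

If the stanza points at the wrong script path, the interval is missing, or the target index doesn't exist, the input can silently produce no data.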

Give those a try and let me know if they don't help. We can go from there.

Also, we had a general suggestion to consider for your deployment. Ideally, you would put the scripted input on a forwarder for greater flexibility and capacity. Generally, it's helpful to have these processing tiers in place:
--Forwarders handling data inputs (including scripted inputs).
--Indexers indexing the data coming from the forwarders.
--Search heads searching the indexed data on the indexers.
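With that tiering, the forwarder running the scripted input just needs an outputs.conf pointing at the indexing tier. A minimal sketch, assuming hypothetical indexer hostnames and the default receiving port 9997:

```ini
# outputs.conf on the forwarder that runs the scripted input
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```

The forwarder load-balances across the listed indexers, so the scripted input runs in exactly one place while the data is still distributed across the cluster.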

I really hope this helps! Let me know either way.

All best,
@frobinson_splunk

frobinson_splunk
Splunk Employee

Also, many thanks to my colleague, Steve G., for his help with this answer!


lguinn2
Legend

Since you are using both indexer clustering and search head clustering, there is no "one place" to run the scripts within Splunk.
However, you could do this:

Pick one indexer to run the script. Set up the script to run as a regular scheduled job (cron or whatever) using the operating system (Linux, Windows, whatever). Have the script write to a file, for example /opt/api/api.log.
Set up regular log file rotation. Now you have a script that is running in one place - just not in Splunk.

On the cluster master, create an input for /opt/api/api.log in one of the master-apps (or create a new app). When you push the configurations from the master to all the indexer peers, all of them will get this input, but only one of them will actually have data in the /opt/api directory.
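The approach above can be sketched as a cron entry plus a monitor stanza. The script name, app name, index, and sourcetype here are hypothetical placeholders:

```ini
# Crontab entry on the one chosen indexer (OS-level, outside Splunk):
#   */10 * * * * /opt/api/poll_api.sh >> /opt/api/api.log
#
# In a master app on the cluster master, e.g.
# $SPLUNK_HOME/etc/master-apps/api_input/local/inputs.conf:
[monitor:///opt/api/api.log]
index = main
sourcetype = api:log
```

All peers receive the monitor stanza when the bundle is pushed, but only the indexer running the cron job has a file to monitor, so the data is collected exactly once.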

Alternatively, you might be able to do this on the search heads. But even if you don't, don't allow anything to index locally on the search heads! The best practice is to configure search heads as forwarders, so that anything they collect is sent to the indexers. I would always set this on search heads, whether they are clustered or not!
Forward search head data
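Configuring a search head as a forwarder comes down to an outputs.conf like the following, per the "Forward search head data" documentation. The group name and indexer hostnames are placeholders:

```ini
# outputs.conf on each search head (hypothetical indexer names)
[indexAndForward]
index = false

[tcpout]
defaultGroup = cluster_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:cluster_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```

With `index = false`, the search head keeps nothing locally; everything it generates (internal logs included) is forwarded to the indexers.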

dturner83
Path Finder

I did make the changes to the search heads that you recommended to not allow them to index data. This fixed the issue of the data not being forwarded to the indexers. However, each search head runs the scripted input on its 10-minute interval.

I'm assuming scripted inputs don't act like scheduled searches in 6.2, where one head executes the input and forwards the data to be indexed.

If that is the case, it does appear the best approach would be to use a separate box as our "data collection API" box and let it forward the data on to the indexers. I was hoping this could be done on the search heads to keep complexity down, but this is a workable solution.


bmacias84
Champion

I would never recommend putting a script that collects data on any cluster member, search head or indexer. Instead, I would recommend setting up a Universal or Heavy Forwarder for all third-party API inputs or inputs from a remote machine. This prevents accidentally indexing data multiple times. At that point you could also use the Deployment Server.

koshyk
Super Champion

@bmacias84: Can you please elaborate on "inputs from a remote machine"?
Do you mean to install a Universal Forwarder on the same machine that hosts a search head and run them in parallel?


bmacias84
Champion

By remote I mean collecting data from a host or system where a forwarder is not installed.

You should never install more than one instance of a Splunk forwarder (or any other Splunk instance) on the same machine unless you know what you are doing. The Universal Forwarder should be installed on a server or host whose sole purpose is to collect third-party data.


dturner83
Path Finder

bmacias84, thank you very much. This seems to be the recommended strategy for scripted inputs as of 6.2.


lguinn2
Legend

True, if you are going to have Splunk run the script. Of course it would be best to have a separate server collect all the third-party or API inputs, using a forwarder. That's optimal, but it requires yet another server. I hadn't thought of using the server that runs the Deployment Server for this purpose; that's an interesting idea. Of course, I would probably have the Deployment Server, the Deployer, and the License Master already running on that server...


frobinson_splunk
Splunk Employee

Hi again, @dturner83,
Can you let me know if you are using a search head cluster? This will help us get you more specific advice.

Thanks!
@frobinson_splunk


dturner83
Path Finder

Yes, we do use search head clustering, with a deployer to configure the cluster members.


frobinson_splunk
Splunk Employee

Ok, thanks! I'll pass this along and report back with some advice ASAP!


dturner83
Path Finder

frobinson - Thank you for your help and input. I have to say, your response times on answers.splunk.com put our Enterprise support agreement's response times to shame.


lguinn2
Legend

It's a great community! But we will never be able to take on some of the questions that Support has to tackle - I am not on the Support team. BTW, a lot of Support folks are also top contributors on answers.splunk.com - so they are helping keep the response times low here, too!

I expect that it's really a matter of what you are willing to commit to, in writing.

frobinson_splunk
Splunk Employee

Hi @dturner83
I'm a tech writer here at Splunk and I'd like to help with your question. I'm looking into this with other writers and our engineering team. I'll post an update when I find out more!

Feel free to post further questions or details here in the meantime.

Best,
@frobinson_splunk
