Deployment server functionality automation?
Hi Team,
We would like to know from Splunk if there are any near-term plans to enhance the procedure for onboarding logs.
Currently we have to edit inputs.conf and serverclass.conf every time we onboard a log source into Splunk.
Just wanted to check if Splunk has any plans to automate this, or to provide a RESTful interface for this service.
Appreciate any input on this.
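For context, the kind of edits we make each time look like this (the path, index, and server-class names below are just illustrative):

```ini
# $SPLUNK_HOME/etc/deployment-apps/myapp_inputs/local/inputs.conf
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log

# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:myapp_servers]
whitelist.0 = appserver*.example.com

[serverClass:myapp_servers:app:myapp_inputs]
restartSplunkd = true
```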
Interesting article at Adding a Deployment Server / Forwarder Management to a new or existing Splunk Cloud (or Splunk Enter...
It says:
-- 3) Using Automation ( Puppet / Chef / Ansible etc) – Be careful when using these in conjunction with DS.. configs can disappear and break…
We have started working on automation via Ansible, and a lot can be done. For example, all our Splunk upgrades are done by Ansible. We haven't tried to supplement the DS functionality yet, even though we can already produce the serverclass.conf artifacts via Ansible.
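As a rough sketch of that idea, a template task could render serverclass.conf from inventory data and then ask the DS to re-read it (the paths, template name, and credentials here are assumptions; `splunk reload deploy-server` is the standard command to reload the config without a restart):

```yaml
# Fragment of an Ansible play targeting the deployment server.
tasks:
  - name: Generate serverclass.conf from inventory data
    ansible.builtin.template:
      src: serverclass.conf.j2          # hypothetical Jinja2 template
      dest: /opt/splunk/etc/system/local/serverclass.conf
      owner: splunk
      group: splunk
      mode: "0600"
    notify: reload deploy-server

handlers:
  - name: reload deploy-server
    # Re-reads serverclass.conf without a full splunkd restart.
    ansible.builtin.command: /opt/splunk/bin/splunk reload deploy-server -auth admin:changeme
```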
Good call! Building on that, here's a cool .conf talk on exactly that topic: http://conf.splunk.com/sessions/2016-sessions.html#search=Ansible&
You may also want to have a look at Is anyone using CI/CD to deploy Splunk apps? and the Appetite GitHub repo, which is another way to further automate configuration (in addition to the deployment server).
That's a super interesting question. I can't speak for the product team or the roadmap - and even if I could, I doubt they'd put any commitment in writing here 😉
The wrinkle here is that many deployments actually do want control over data ingestion so as not to compromise their license. Imagine if one of your users added a data source that turned out to be sending a ton of data into your Splunk environment, resulting in license violations - you wouldn't know until it was a problem. Another challenge is that for some data it is harder for Splunk to auto-determine things like event breaks and timestamps. Therefore, many admins prefer to define those details before data ingest begins, so that the indexers perform as effectively as possible.
That said, the REST API is pretty awesome and can be used to do a lot. You could probably create a new monitor input on the forwarder using the REST API (http://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTinput#data.2Finputs.2Fmonitor), but then it wouldn't be managed by the deployment server. Alternatively, you could use the REST API to create a serverclass on the deployment server (http://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTdeploy#deployment.2Fserver.2Fservercl...) and have it reference an app that your solution automatically created and staged on the same deployment server.
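A minimal sketch of both REST calls, assuming the documented `data/inputs/monitor` and `deployment/server/serverclasses` endpoints; the host names, port, index, and class names are placeholders, and the requests are only built here, not sent (you'd pass them to `urlopen` with auth headers in practice):

```python
# Sketch: build the POSTs that create a monitor input on a forwarder and a
# server class on the deployment server via the Splunk REST API.
from urllib.parse import urlencode
from urllib.request import Request

def monitor_input_request(base_url, path, index):
    """Build the POST that creates a new file monitor input on a forwarder."""
    data = urlencode({"name": path, "index": index}).encode()
    return Request(f"{base_url}/services/data/inputs/monitor",
                   data=data, method="POST")

def serverclass_request(base_url, name):
    """Build the POST that creates a new server class on the deployment server."""
    data = urlencode({"name": name}).encode()
    return Request(f"{base_url}/services/deployment/server/serverclasses",
                   data=data, method="POST")

# Example (not sent):
req = monitor_input_request("https://fwd01:8089", "/var/log/myapp/app.log", "myapp")
print(req.full_url)
```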
Lastly, remember that the configuration in this context is all flat files and therefore could be generated with a shell script on the deployment server.
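To illustrate that last point, here is a tiny sketch that emits a serverclass.conf stanza as plain text (the class, app, and host names are made up; after writing the file you'd run `splunk reload deploy-server` to pick it up):

```python
# Sketch: generate a serverclass.conf stanza -- DS config is just a flat
# file, so any script that writes these lines will do.
def serverclass_stanza(class_name, app_name, whitelist):
    """Return a serverclass.conf stanza mapping a server class to an app."""
    lines = [f"[serverClass:{class_name}]"]
    lines += [f"whitelist.{i} = {host}" for i, host in enumerate(whitelist)]
    lines += ["", f"[serverClass:{class_name}:app:{app_name}]",
              "restartSplunkd = true"]
    return "\n".join(lines)

print(serverclass_stanza("myapp_servers", "myapp_inputs",
                         ["appserver*.example.com"]))
```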
But other than that, there is nothing currently available to automate that for you.
