I am trying to do a distributed deployment (multiple search heads and indexers) of the EMC Isilon App and Add-on for Splunk Enterprise and the instructions call for setting it up via Splunk Web.
Can you please provide details on what files to store the Isilon credentials in so I can configure it via the deployment server by editing files directly?
Also, can you provide a sample syslog.conf file instead of that weirdly formatted section in the instructions?
I’d like to be able to point the Isilon at the forwarder using a nonstandard port as well.
Can you provide details on configuring the Isilon syslog output to go to another port instead of just 514? My heavy forwarder is configured with three other ports for syslog data to classify different sources to specific indexes or sourcetypes.
The documentation says to set up the Isilon credentials via Splunk Web, but I'd like to just edit the settings directly on the deployment server and push things as needed to the correct nodes.
Can you please provide instructions on what file to update with credentials?
Additionally, can you please provide information on using a nonstandard syslog port when configuring the syslog settings on the Isilon? (I'd like to have my forwarder pick up logs on a high port dedicated to Isilon logs, so I can parse multiple syslog message sources on one system with different input stanzas.)
To answer your questions:
For the above approach you might need to make splunk.secret the same on both the deployment server and the forwarder ($SPLUNK_HOME/etc/auth/splunk.secret).
Allow port 514 to receive the data (the --reload is needed for --permanent rules to take effect on the running firewall):
firewall-cmd --zone=public --add-port=514/udp --permanent
firewall-cmd --reload
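If you would rather listen on a nonstandard port than 514, the forwarder side is just a UDP input stanza. A minimal sketch, assuming port 5140, an index named isilon, and an app directory called my_syslog_inputs (all three are example values):

```ini
# $SPLUNK_HOME/etc/apps/my_syslog_inputs/local/inputs.conf
# Listen for Isilon syslog on UDP 5140 instead of 514
[udp://5140]
sourcetype = emc:isilon:syslog
index = isilon
connection_host = ip
```

Open the matching port in the firewall (swap 5140 for 514 in the firewall-cmd above) and point the Isilon's syslog target at that port.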
Let me know if you need any further help.
I have a heavy forwarder that collects REST data and a load-balanced set of heavy forwarders that collect syslog. These servers also collect other data; they are not dedicated to Isilon. I have a search head cluster that hosts the App and a multi-site indexer cluster.
The heavy forwarder that collects REST data from the Isilon cluster node has the Add-on installed with the isilon_setup.py and macros.conf changes described below. Delete default/inputs.conf, default/eventgen.conf, and the samples/ directory.
The heavy forwarders that collect syslog have no Isilon app components installed. My general-purpose syslog collection is configured so that Isilon sources are source-typed as emc:isilon:syslog. You will have to set up something like what Crest supplied with their default/inputs.conf.
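If your Isilon nodes share a syslog port with other sources, one way to source-type them is a host-based override on the forwarder. This is only a sketch: the port, the host-matching regex, and the stanza names are assumptions, not what Crest ships.

```ini
# props.conf
[source::udp:5140]
TRANSFORMS-isilon = isilon_syslog_sourcetype

# transforms.conf
[isilon_syslog_sourcetype]
# Host values carry a "host::" prefix when read from MetaData:Host
SOURCE_KEY = MetaData:Host
REGEX = ^host::isilon
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::emc:isilon:syslog
```

A dedicated port per source (separate [udp://...] stanzas, each with its own sourcetype) avoids the regex entirely and is simpler if you can spare the ports.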
The indexers have no Isilon app components installed. Obviously they have indexes configured, one of which is for Isilon events.
The search head cluster has the Add-on and App installed with the distsearch.conf and macros.conf changes. Delete default/inputs.conf, default/eventgen.conf, and the bin/ and samples/ directories.
Create local/distsearch.conf containing:
[replicationSettings:refineConf]
replicate.macros = true
Why?: Search heads do not normally put macros into the bundles that they forward to their indexers. This changes that behavior. Don't bother putting the macro directly on your indexers; it has to come from your search tier as part of a search bundle. The locally defined macro is only used when you run the search on the indexer itself, like from the indexer's UI.
Create local/macros.conf containing:
[isilon_index]
definition = index=thenameofyourindex
iseval = 0
Why?: The 'isilon_index' macro is not defined in default/macros.conf (it should be IMO) and your searches will fail with errors because the macro is not defined. This does not only affect Isilon-related searches, but other searches that use certain tags and/or event types. Change the index name to match whatever you want to use as an index. The default name is 'isilon'. That's what the app sets if you do not provide an index name on the setup screen. That's also the index you are stuck with if you run setup again for the same node with a new index name. You must edit local/inputs.conf directly if you want to change the index after initial configuration – the setup screen will ignore an index name if one is already set (tested with v2.3).
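For reference, changing the index after initial setup amounts to editing the index value in whatever stanza the setup script wrote to local/inputs.conf. The stanza name below is purely illustrative (I have not confirmed the TA's input scheme name), so keep your existing stanza as-is and change only the index line:

```ini
# $SPLUNK_HOME/etc/apps/TA_EMC-Isilon/local/inputs.conf
# Stanza name is a placeholder; use the one the setup script created
[isilon://my-isilon-node]
index = thenameofyournewindex
```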
Remove (comment) lines 145-148 of bin/isilon_setup.py:
#indexes = en.getEntities(['data', 'indexes'], count=-1, sessionKey=sessionKey)
#if not index in indexes.keys():
#    logger.error("EMC Isilon Error: index %s does not exist" % index)
#    raise Exception("EMC Isilon Error: index %s does not exist" % index)
Why?: The app insists that you have a local index on the heavy forwarder that you are using for collection. Why? Because the TA was written with a single-instance Splunk environment in mind, I guess. Crest is not unique in writing code that assumes that the index is defined on the local Splunk instance even if it doesn't make any sense. Splunk even does it. Mind-boggling. This code will cause the setup to fail with an error. It doesn't bother mentioning that it is failing because an index that you do not need does not exist, which would be helpful and not just frustrating. Of course, make sure the index exists on the indexer tier where REST data is heading.
My distsearch.conf change worked for a little while and then stopped (Splunk 7.1.2). I don't know why.
My current solution is to remove all occurrences of `isilon_index` in eventtypes.conf. The easiest way to proceed may be to copy default/eventtypes.conf into local/eventtypes.conf and remove just the macro from any lines that contain it.
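As an illustration of that edit (the stanza name and search string here are made up; work from whatever is actually in default/eventtypes.conf):

```ini
# default/eventtypes.conf (before)
[isilon_syslog_events]
search = sourcetype=emc:isilon:syslog `isilon_index`

# local/eventtypes.conf (after): same stanza, macro removed
[isilon_syslog_events]
search = sourcetype=emc:isilon:syslog
```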
Yes, I am deploying both. The App has proper instructions for distributed deployment, but the Add-on seems to be lacking a few details for me to be able to get it up and running.
We were able to get the App and Add-on installed in our search head cluster. We installed the Add-on on one of our heavy forwarders, but have been unable to get the setup script to run properly. It errors out with the message:
Error while posting to url=/servicesNS/nobody/TA_EMC-Isilon/isiloncustom/isilonendpoint/setupentity
Can the Crest Data Systems team provide an example of a "manual setup" of the app, or a description of which config files the setup script creates or modifies?