Getting Data In

How do you configure Splunk to forward search head cluster data to the indexer layer, and verify it's working correctly?

transtrophe
Communicator

I had a previous post - "How to get search head cluster members to forward internal data to indexer cluster?" - but I don't think it is working correctly yet.

I am a bit confused by step 1 of the "Best practice: Forward search head data to the indexer layer" documentation (http://docs.splunk.com/Documentation/Splunk/6.2.2/DistSearch/Forwardsearchheaddata), which states:

  1. Make sure that all necessary indexes exist on the indexers. For example, the S.o.S app uses a scripted input that puts data into a custom index. If you install S.o.S on the search head, you need to also install the S.o.S Add-on on the indexers, to provide the indexers with the necessary index settings for the data the app generates. On the other hand, since _audit and _internal exist on indexers as well as search heads, you do not need to create separate versions of those indexes to hold the corresponding search head data.

Now I do have S.O.S. configured (and running) on each of my search head cluster members, so do I also need to have S.O.S. installed on the indexers if what I want to have pushed down to the indexer layer from the search head is the _audit and _internal data?

On the search head cluster members' outputs.conf (they all have the same outputs.conf) I have the following:

[tcpout]
maxQueueSize = auto
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection)
forwardedindex.filter.disable = true
indexAndForward = false
autoLBFrequency = 30
blockOnCloning = true
compressed = false
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
heartbeatFrequency = 30
maxFailuresPerInterval = 2
secsInFailureInterval = 1
maxConnectionsPerIndexer = 2
forceTimebasedAutoLB = false
sendCookedData = true
connectionTimeout = 20
readTimeout = 300
writeTimeout = 300
useACK = false
blockWarnThreshold = 100
sslQuietShutdown = false
defaultGroup = transtrophe_search_peers

[syslog]
type = udp
priority = <13>
dropEventsOnQueueFull = -1
maxEventSize = 1024

[indexAndForward]
index = false

[tcpout:transtrophe_search_peers]
server=ip-172-31-20-173:9997,ip-172-31-18-186:9997,ip-172-31-22-253:9997,ip-172-31-26-200:9997,ip-172-31-20-120:9997
autoLB = true 

skalliger
Motivator

Now I do have S.O.S. configured (and running) on each of my search head cluster members, so do I also need to have S.O.S. installed on the indexers if what I want to have pushed down to the indexer layer from the search head is the _audit and _internal data?

You don't necessarily need to install the S.O.S. app on the indexers as well. You could just configure the index definition on your indexers yourself. An easier way, of course, would be to deploy the S.O.S. app from the master to the indexers.
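If you configure it yourself, it is just an index definition in indexes.conf on the indexers (or in a small app deployed from the master). A minimal sketch, assuming the app writes to an index named sos - check the add-on for the actual index name and paths:

# indexes.conf on the indexers (hypothetical index name "sos")
[sos]
homePath   = $SPLUNK_DB/sos/db
coldPath   = $SPLUNK_DB/sos/colddb
thawedPath = $SPLUNK_DB/sos/thaweddb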

For Splunk data itself, there are no additional actions required besides modifying your outputs.conf to forward data to your indexers.
But this

forwardedindex.2.whitelist = (_audit|_internal|_introspection)

will actually only send data from those three indexes.

As a side note, we prefer weighted load balancing (I mention this because I saw autoLBFrequency = 30 in your outputs.conf, so I assume you're not using weighted LB; still, that value takes effect with weighted LB as well). You have quite a lot of settings in your tcpout stanza.

Skalli

Edit: Damn, that other guy tricked me. It's an old thread.


nit123
Path Finder

Please follow steps 1 to 3 to configure a forwarder that sends events to an indexer, and a search head that searches the data the forwarder brings to that indexer.

1) Set up a Forwarder
Before a forwarder can forward data, it must have a configuration that tells it what data to send and where to send it.
To enable forwarding, navigate to Settings -> Forwarding & Receiving -> Configure Forwarding -> New and set the IP address of the Splunk instance to forward data to.
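Behind the scenes, enabling forwarding writes an outputs.conf for the forwarder. A minimal sketch, assuming a single indexer at the hypothetical address 10.0.0.5 listening on port 9997:

# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.5:9997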

2) Set up an Indexer
All full Splunk Enterprise instances serve as indexers by default. (See the Splunk Enterprise installation documentation for how to install an instance.)
The indexer is the Splunk Enterprise component that creates and manages indexes. The primary functions of an indexer are:
a) Indexing incoming data.
b) Searching the indexed data.
In single-machine deployments consisting of just one Splunk Enterprise instance, the indexer also handles the data input and search management functions.
To forward remote data to an indexer, you use forwarders, which are Splunk Enterprise instances that receive data inputs and then consolidate and send the data to a Splunk Enterprise indexer.
To enable receiving on the indexer, navigate to Settings -> Forwarding & Receiving -> Configure Receiving -> New and specify the port to listen on (the forwarder sends data to this port; 9997 is the convention).
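The same receiving port can also be enabled directly in inputs.conf on the indexer. A minimal sketch, assuming port 9997:

# inputs.conf on the indexer
[splunktcp://9997]
disabled = 0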

3) Set up a Search Head

You can install one or more search heads to handle your distributed search needs. Search heads are just full Splunk Enterprise instances that have been specially configured.
You can set up a search head either from the Splunk web interface or from the command line.
Add search peers on the search head by navigating to Settings -> Distributed Search -> Search peers -> New and adding the indexer IP address (and management port) to talk to. Make sure each member of the cluster has a unique server name. You can set this in two ways:
1) From the Splunk GUI, under Settings -> Server settings -> General Settings, update the field "Splunk server name".
2) Edit the field "serverName" in $SPLUNK_HOME/etc/system/local/server.conf and then restart Splunk.
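For reference, the equivalent configuration files look roughly like the sketch below, assuming a hypothetical indexer at 10.0.0.5 with the default management port 8089 and a search head named sh01. Note that adding a search peer through the UI or CLI also exchanges authentication keys, which editing distsearch.conf by hand does not do.

# distsearch.conf on the search head
[distributedSearch]
servers = https://10.0.0.5:8089

# server.conf on the search head ($SPLUNK_HOME/etc/system/local/server.conf)
[general]
serverName = sh01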

Hope this helps!
