Getting Data In

In selective indexing with defaultGroup=noforward, do I have to worry about adding _INDEX_AND_FORWARD_ROUTING to Splunk default inputs also?

wrangler2x
Motivator

I'm reading the section "Index one input locally and then forward all inputs" in "Route and filter data," which covers selectiveIndexing=true and index=true. I have a couple of questions about it, but first, here is my understanding.

Nothing gets indexed or forwarded unless you explicitly enable it for each input on the system, correct? As I understand the documentation, for every input stanza in any inputs.conf file, you add _INDEX_AND_FORWARD_ROUTING = local to enable indexing, _TCP_ROUTING = myDefinedIndexer if you want that input's data forwarded to another indexer, and both if you want both.
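To make that concrete, here is a sketch of my understanding from the docs; the monitor path, server address, and the group name myDefinedIndexer are placeholders:

[monitor:///var/log/app.log]
# inputs.conf -- with selective indexing on, each input opts in explicitly:
# index this input locally...
_INDEX_AND_FORWARD_ROUTING = local
# ...and also forward it to the group defined in outputs.conf
_TCP_ROUTING = myDefinedIndexer

# outputs.conf -- selective indexing enabled; nothing forwards by default
# because "noforward" is not a defined tcpout group
[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
defaultGroup = noforward

[tcpout:myDefinedIndexer]
server = indexer.example.com:9997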

Now the main question is -- does this apply to Splunk's internal inputs/indexes, or do I only have to worry about the inputs that I've created since I installed Splunk?

A second question I have is this: when the logs for any given input are forwarded, does the forwarded information allow the receiving indexer to know what index they should be put in, assuming both indexers have the same indexes?

And my final question: if the first indexer has filters (transforms) to drop some logs, and index others, does this behavior apply to forwarded logs? (I hope the answer to this is yes!).

For anybody wondering what I'm doing: I'm migrating to a new system, so I want to send logs from the old system to the new one for a few weeks before I switch all my forwarders and syslog senders over.

1 Solution

wrangler2x
Motivator

I finally figured out a better way to do this than selective indexing/forwarding. I set the defaults to index everything and forward everything, then use blacklists to exclude the indexes I don't want forwarded, and it works well. Here is the config:

[tcpout]
defaultGroup = mynewserver_9998
disabled = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.blacklist = summary_.*
forwardedindex.3.blacklist = syslogs_.*
forwardedindex.4.blacklist = (bitbucket|firedalerts|fishbucket|mainframe_index|historydb|os|ossec|temp_index|unix_summary)
forwardedindex.5.blacklist = (authDb|hashDb|historydb|summarydb)
indexAndForward = true


[tcpout:mynewserver_9998]
server = 128.xxx.xxx.xxx:9998

[tcpout-server://128.xxx.xxx.xxx:9998]
sslCertPath = $SPLUNK_HOME/etc/auth/servercert.pem
sslPassword = $1$4LxTWwXEyIY=
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
sslVerifyServerCert = false


woodcock
Esteemed Legend

You should click Accept on your answer to close out the question.


wrangler2x
Motivator

@woodcock -- Thanks for the reminder. I just did.

Masa
Splunk Employee

I see. What you wanted is not what 'selective indexing' is meant for, because your goal was not to index only some inputs while forwarding selected ones. Your way should work. Alternatively, I believe transforms could do a similar job.
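For reference, a transforms-based filter along the lines Masa mentions might look like the following; the sourcetype name my_noisy_sourcetype and the regex are hypothetical:

# props.conf -- hypothetical sourcetype; applies the filter at parse time
[my_noisy_sourcetype]
TRANSFORMS-drop_noise = drop_debug_events

# transforms.conf -- route matching events to nullQueue so they are
# neither indexed nor forwarded
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Because this runs in the parsing pipeline, matching events are discarded before the indexing and forwarding stages.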


Masa
Splunk Employee
  1. Does this apply to Splunk's internal inputs/indexes?
    => Yes, though I'm not sure about the _audit index. I'm not sure whether _audit will be indexed, forwarded, or not indexed at all.

  2. Does the forwarded information allow the receiving indexer to know what index the events should be put in?
    => Yes, because processed event data carries metadata indicating which index should be used.

  3. If the first indexer has filters (transforms) to drop some logs and index others, does this behavior apply to forwarded logs?
    => No, because the old indexer has already "parsed" the events, and the new (second) indexer will not re-parse already-parsed data.
    => forwarder (probably a UF) -> old indexer (filters applied) -> new indexer (does not re-filter what was already processed on the old indexer)

Why not clone from the forwarder to both the old and the new indexer?
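Cloning from a forwarder works by listing more than one target group in defaultGroup, which sends a copy of the data to each group. A minimal sketch, with hypothetical group names and addresses:

# outputs.conf on the forwarder -- data is cloned to every group
# named in the comma-separated defaultGroup list
[tcpout]
defaultGroup = old_indexer, new_indexer

[tcpout:old_indexer]
server = 10.0.0.1:9997

[tcpout:new_indexer]
server = 10.0.0.2:9997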


wrangler2x
Motivator

Regarding the first question, the documentation seems to say that once you turn on selective indexing, no external index gets indexed except inputs where you have added _INDEX_AND_FORWARD_ROUTING = local to the inputs.conf stanza, and this seems to imply that it would not affect internal indexes. Whether internal indexes are forwarded in that case isn't clear to me. I do see that with selective indexing turned off, the default behavior for Splunk Enterprise is this:

[tcpout]
   forwardedindex.0.whitelist = .*
   forwardedindex.1.blacklist = _.*
   forwardedindex.2.whitelist = (_audit|_internal)

As defined in /opt/splunk/etc/system/default/outputs.conf

On my third question, what I was really getting at is whether the events I don't want will be dropped before forwarding. In other words, does it forward first and then filter (via transforms.conf) and index, or does it apply the transforms and then index and/or forward?

After reading your answer, I re-read the documentation, and I believe your statement is correct: the indexer these events are forwarded to will simply put whatever comes in into the indexes the log entries were parsed for.

As for forwarding to both the new and the old server, that's a big deal because I've got 150 forwarders out there with the old indexer's IP address hard-coded in the default outputs.conf, and I can't change that or override it using Deployment Monitor. This is just interim anyway; when we are ready to switch over to the new system, we are going to do these things:

  1. Bring Splunk down on both systems.
  2. Rename the old system to its current name with -legacy appended.
  3. Make the old system's original name a CNAME pointing to the new system's name.
  4. Swap the IP addresses of the two systems, both on the hosts and in DNS.
  5. Start Splunk up again on both.

We'll drop the TTL on these ahead of time.
