Getting Data In

Definitions of the route keys and queueNames for splunktcp input stanza in inputs.conf?

Builder

Does anyone have definitions of the route keys and queueNames for splunktcp input stanza?

Note that I do not want to customise this! I have inherited an environment where the previous admins have hardcoded a setting to override the default, and I need to understand what it all does before I can revert it!

The docs note that the setting exists, but not the specifics of what can be set there (http://docs.splunk.com/Documentation/Splunk/6.5.3/Admin/Inputsconf)

[splunktcp]
route = [has_key|absent_key:<key>:<queueName>;...]
* Settings for the light forwarder.
* The receiver sets these parameters automatically -- you DO NOT need to set
  them.
* The property route is composed of rules delimited by ';' (semicolon).
* The receiver checks each incoming data payload via cooked tcp port against
  the route rules.
* If a matching rule is found, the receiver sends the payload to the specified
  <queueName>.
* If no matching rule is found, the receiver sends the payload to the default
  queue specified by any queue= for this stanza. If no queue= key is set in
  the stanza or globally, the events will be sent to the parsingQueue.

FYI the default is:

route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
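To make the rule format concrete, here is a toy model (illustrative only, not Splunk's actual implementation) that splits the default route into its ';'-delimited "condition:key:queueName" rules and applies the has_key / absent_key matching described in the spec excerpt above:

```python
# Toy model of splunktcp route evaluation -- illustrative only, not Splunk
# source code. Rules are ';'-delimited "condition:key:queueName" triples,
# checked in order; the first match decides the destination queue.

DEFAULT_ROUTE = (
    "has_key:_replicationBucketUUID:replicationQueue;"
    "has_key:_dstrx:typingQueue;"
    "has_key:_linebreaker:indexQueue;"
    "absent_key:_linebreaker:parsingQueue"
)

def parse_route(route):
    """Split a route value into (condition, key, queueName) triples."""
    return [tuple(rule.split(":")) for rule in route.split(";")]

def route_payload(payload_keys, route, default_queue="parsingQueue"):
    """Return the queue a payload carrying the given keys would go to."""
    for condition, key, queue in parse_route(route):
        if condition == "has_key" and key in payload_keys:
            return queue
        if condition == "absent_key" and key not in payload_keys:
            return queue
    return default_queue  # no rule matched: fall back to the stanza default

# Parsed data from a heavy forwarder carries _linebreaker -> indexQueue;
# raw data lacks it -> parsingQueue.
print(route_payload({"_linebreaker"}, DEFAULT_ROUTE))
print(route_payload(set(), DEFAULT_ROUTE))
```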

Cheers!
Glenn

Splunk Employee

Hi Glenn

For an overview of the queueNames, see http://wiki.splunk.com/Community:HowIndexingWorks
The _key values are not documented, but the names are mostly self-explanatory. The only odd one is _dstrx (next destination: the regex processor).

These values do not normally need to be changed, but I have seen people change "_linebreaker:indexQueue" to "_linebreaker:parsingQueue" to force data that has already been parsed on a heavy forwarder (HWF) to be reparsed on an indexer, so that it hits props/transforms again. Normally, parsed data would go straight to the indexQueue.
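For the record, that force-reparse tweak would look like this in inputs.conf on the indexer. This is a sketch derived from the default route value posted above, changing only the _linebreaker rule; verify the other rules against your Splunk version's defaults before deploying:

```ini
# inputs.conf on the indexer: send already-parsed (cooked) data from a
# heavy forwarder back through the parsing pipeline so props/transforms
# are applied again. Only the _linebreaker rule differs from the default.
[splunktcp]
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:parsingQueue;absent_key:_linebreaker:parsingQueue
```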

HTH
Shaky

Path Finder

Hi @dshakespeare_splunk

this topic is quite old, but I hope you'll be notified so you can help me with an additional question.

Let's say I forward data from one forwarder to another. The first one performs the parsing, but only the second one can route to the indexers (for network/security reasons).

I'd need to send the data back through the typingQueue to enable the second forwarder to perform the routing, correct?

Would that mean that on the second forwarder I'd have to configure "has_key:_linebreaker:typingQueue", since _linebreaker is the only indication that the data is already cooked?

Or is there a different method to do such?

Thank you very much in advance.

Kind regards
Marco

Influencer

See this answer for links to the various documentation: https://answers.splunk.com/answers/83334/what-are-the-various-queues-in-splunk.html

Splunk Support may also be able to help you redesign any queue customizations.

Esteemed Legend

Any comments in the file or an author name (maybe in the app.conf file of the app that contains this file)?

Builder

It's not commented in the file or the system spec. From what I can tell it was implemented based on this Splunk Answer: https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...

I actually know what it was intended for (the same reason the Splunk Answer above was asked: to force reparsing of cooked data on the indexers after receiving it from the HF layer). However, the defaults have changed since 2013, when this was devised, and the hardcoded value is now missing some newer queues, like replicationQueue, that may actually be required. I'd like to know what they all do so I can redesign it from first principles.
