
Why is no data being written to the _internal index for my search-head?

jbsplunk
Splunk Employee

No logs are being written to the _internal index for one of my search-heads. I noticed this while looking for entries in splunkd.log: from the search head I could see those events for my indexers, but not for the search head itself. I've queried the TailingProcessor:FileStatus endpoint on the search-head's REST API and can see that the files are being read, and the inputs stanza clearly sets index = _internal.
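
For reference, the endpoint can be queried along these lines (a sketch; the hostname, credentials, and the default management port 8089 are placeholders):

curl -k -u admin:changeme https://searchhead.example.com:8089/services/admin/inputstatus/TailingProcessor:FileStatus

The response lists each monitored file along with its read status, which is how I confirmed the inputs are being consumed.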

Why is my search-head not writing any events to the _internal index?

Here is some output from the S.o.S (Splunk on Splunk) app's Configuration File Viewer; the first column shows the app context each setting comes from.

inputs.conf

search     [monitor:///opt/splunk/var/log/splunk]
system     _rcvbuf = 1572864
search     disabled = false
search     followTail = 0
system     host = someserver
search     index = _internal
search     sourcetype = splunkd

Relevant props.conf entries:

system     [source::.../var/log/splunk/(license_usage).log(.\d+)?]
system     ANNOTATE_PUNCT = True
system     BREAK_ONLY_BEFORE = 
system     BREAK_ONLY_BEFORE_DATE = True
system     CHARSET = UTF-8
system     DATETIME_CONFIG = /etc/datetime.xml
system     HEADER_MODE = 
system     LEARN_SOURCETYPE = true
system     LINE_BREAKER_LOOKBEHIND = 100
system     MAX_DAYS_AGO = 2000
system     MAX_DAYS_HENCE = 2
system     MAX_DIFF_SECS_AGO = 3600
system     MAX_DIFF_SECS_HENCE = 604800
system     MAX_EVENTS = 256
system     MAX_TIMESTAMP_LOOKAHEAD = 128
system     MUST_BREAK_AFTER = 
system     MUST_NOT_BREAK_AFTER = 
system     MUST_NOT_BREAK_BEFORE = 
system     SEGMENTATION = indexing
system     SEGMENTATION-all = full
system     SEGMENTATION-inner = inner
system     SEGMENTATION-outer = outer
system     SEGMENTATION-raw = none
system     SEGMENTATION-standard = standard
system     SHOULD_LINEMERGE = True
system     TRANSFORMS = 
system     TRUNCATE = 10000
system     maxDist = 100
system     sourcetype = splunkd

search     [splunkd]
system     ANNOTATE_PUNCT = True
system     BREAK_ONLY_BEFORE = 
system     BREAK_ONLY_BEFORE_DATE = True
system     CHARSET = UTF-8
system     DATETIME_CONFIG = /etc/datetime.xml
search     EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<message>.+)
system     HEADER_MODE = 
system     LEARN_SOURCETYPE = true
system     LINE_BREAKER_LOOKBEHIND = 100
system     MAX_DAYS_AGO = 2000
system     MAX_DAYS_HENCE = 2
system     MAX_DIFF_SECS_AGO = 3600
system     MAX_DIFF_SECS_HENCE = 604800
system     MAX_EVENTS = 256
system     MAX_TIMESTAMP_LOOKAHEAD = 40
system     MUST_BREAK_AFTER = 
system     MUST_NOT_BREAK_AFTER = 
system     MUST_NOT_BREAK_BEFORE = 
system     SEGMENTATION = indexing
system     SEGMENTATION-all = full
system     SEGMENTATION-inner = inner
system     SEGMENTATION-outer = outer
system     SEGMENTATION-raw = none
system     SEGMENTATION-standard = standard
system     SHOULD_LINEMERGE = True
system     TRANSFORMS = 
system     TRUNCATE = 10000
system     maxDist = 100

As you can see, no nullQueue transforms have been used.
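
(For contrast, a nullQueue routing, which would silently discard events at parse time, would look something like the hypothetical example below in props.conf and transforms.conf. Nothing of the sort appears in my configuration.)

props.conf:
[splunkd]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue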

outputs.conf:

OutputApp [tcpout:indexers]
OutputApp autoLB = true
OutputApp disabled = false
OutputApp indexAndForward = false
OutputApp server = server1:9997,server2:9997,server3:9997,server4:9997,server5:9997

OutputApp  [tcpout]
system     autoLB = true
system     autoLBFrequency = 30
system     blockOnCloning = true
system     compressed = false
system     connectionTimeout = 20
OutputApp  defaultGroup = indexers
system     disabled = false
system     dropClonedEventsOnQueueFull = 5
system     dropEventsOnQueueFull = -1
system     forwardedindex.0.whitelist = .*
system     forwardedindex.1.blacklist = _.*
system     forwardedindex.2.whitelist = _audit
system     forwardedindex.filter.disable = false
system     heartbeatFrequency = 30
system     indexAndForward = false
system     maxConnectionsPerIndexer = 2
system     maxFailuresPerInterval = 2
system     maxQueueSize = 500KB
system     readTimeout = 300
system     secsInFailureInterval = 1
system     sendCookedData = true
system     useACK = false
system     writeTimeout = 300

Has anyone seen this before? How can I correct this behavior?

1 Solution

hexx
Splunk Employee

Looking at your outputs.conf, you have configured this search-head to forward any events it gathers back to your indexers:

OutputApp [tcpout:indexers]
OutputApp autoLB = true
OutputApp disabled = false
OutputApp indexAndForward = false
OutputApp server = server1:9997,server2:9997,server3:9997,server4:9997,server5:9997

This is good practice in general, particularly if the search-head runs a lot of summarization jobs: the resulting events are spread across your indexers, so you can leverage distributed search even when searching your summary indexes.

The reason you cannot find any of the events normally destined for _internal from this search-head has to do with the default forwardedindex settings in $SPLUNK_HOME/etc/system/default/outputs.conf:

OutputApp  [tcpout]
(...)
system     forwardedindex.0.whitelist = .*
system     forwardedindex.1.blacklist = _.*
system     forwardedindex.2.whitelist = _audit
system     forwardedindex.filter.disable = false
(...)

From outputs.conf.spec:

#----Index Filter Settings.
forwardedindex.<n>.whitelist = <regex>
forwardedindex.<n>.blacklist = <regex>
* These filters determine which events get forwarded, based on the indexes they belong to.
* This is an ordered list of whitelists and blacklists, which together decide if events should be forwarded to an index.
* The order is determined by <n>. <n> must start at 0 and continue with positive integers, in sequence. There cannot be any gaps in the sequence. (For example, forwardedindex.0.whitelist, forwardedindex.1.blacklist, forwardedindex.2.whitelist, ...). 
* The filters can start from either whitelist or blacklist. They are tested from forwardedindex.0 to forwardedindex.<max>.
* You should not normally need to change these filters from their default settings in $SPLUNK_HOME/etc/system/default/outputs.conf.

forwardedindex.filter.disable = [true|false]
* If true, disables index filtering. Events for all indexes are then forwarded.
* Defaults to false.
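
In practice, an event's destination index is tested against each filter in order, and the last filter that matches decides whether the event is forwarded. Walking through the defaults for an event bound for _internal:

forwardedindex.0.whitelist = .*        # matches -> forward
forwardedindex.1.blacklist = _.*       # matches -> do not forward
forwardedindex.2.whitelist = _audit    # does not match _internal

The last match is the blacklist, so _internal events are filtered out. An _audit event, by contrast, also matches filter 2 and therefore does get forwarded.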

That spec explanation is actually somewhat incomplete (don't worry, the spec file is being fixed in a future release): when events destined for a given index are filtered out of forwarding by a forwardedindex directive, they are neither forwarded nor indexed locally!

This is why you cannot find any _internal events recorded by your search-head anywhere.

To correct this, add the following configuration to $SPLUNK_HOME/etc/system/local/outputs.conf:

[tcpout]
forwardedindex.3.whitelist = _internal
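
After restarting splunkd, you can confirm that the new filter is in effect and that events are flowing again. A quick check, assuming a Unix shell and a placeholder search-head host name of sh01:

$SPLUNK_HOME/bin/splunk restart
$SPLUNK_HOME/bin/splunk btool outputs list tcpout | grep forwardedindex

Then, from the search-head, run a search such as:

index=_internal host=sh01 source=*splunkd.log earliest=-5m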

Note that we are considering changing this default behavior in a future release and whitelisting _internal for forwarding, just as _audit is today.



jbsplunk
Splunk Employee

Thanks for the information. This resolved the issue: after adding a line to the [tcpout] stanza whitelisting _internal, I can see that _internal events are now being forwarded.

Ayn
Legend

By default, data from internal indexes is not forwarded. In 4.1 and later, you can control this by setting the parameter forwardedindex.filter.disable to true. If you only want to enable forwarding for specific internal indexes, you can instead use the forwardedindex blacklist and whitelist directives available in outputs.conf.
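
For example, to disable index filtering entirely, so that events for all indexes (internal ones included) are forwarded, you would set the following in $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarding instance:

[tcpout]
forwardedindex.filter.disable = true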

More information available here:
http://splunk-base.splunk.com/answers/2737/how-can-i-forward-the-internal-splunk-logs-of-a-splunk-de...
http://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf

EDIT: On closer inspection, I see that the blacklist and whitelist filters are already in place, so I was a bit too hasty with my response. The only advice I can give is to try disabling the filter completely, see what you get, and work your way from there.
