Getting Data In

Did license window alerts on my indexer cause Splunk to stop working?

seema2502
Explorer

Hi Team,

My Splunk instance has stopped indexing. Could this be because I have 12 permanent license warnings on my indexer? All but one are older than 3 months; the most recent warning arrived on Sep 5, 2014 12:00:00 AM (one week ago): "Indexing quota exceeded for this pool, poolsz=524288000 bytes". Splunk stopped working after this warning. Please find below the 12 license window alerts I am receiving on the indexer.

Time                    Message                                                                                       Indexer   Pool    Stack                   Category
9/5/2014 0:00   Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (1 week ago)                  
5/28/2014 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (3 months ago)                    
11/20/2013 0:00 Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (10 months ago)                   
10/1/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (11 months ago)                   
9/29/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (11 months ago)                   
9/28/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (11 months ago)                   
9/26/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (11 months ago)                   
9/22/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (12 months ago)                   
9/21/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (12 months ago)                   
9/20/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (12 months ago)                   
9/19/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window    (12 months ago)                   
9/18/2013 0:00  Indexing quota exceeded for this pool, poolsz=524288000 bytes   xyz auto_generated_pool_free    free    license_window     (12 months ago)                  

Please suggest.

Thanks,
Seema


seema2502
Explorer

Yes, we are looking at an event logged by a forwarder that was forwarded to the indexer, but there is no evidence in the forwarder's metrics.log that it was actually forwarded.
Yes, outputs.conf stanzas exist for that indexer.
When we ran ./bin/splunk cmd btool outputs list, we found:
[tcpout-server://indexer:port]
[tcpout:indexer_port]
server = indexer:port

When we started Splunk in DEBUG mode, we found the following in splunkd.log:

09-15-2014 07:30:56.856 DEBUG TailingProcessor - Deferred notification for path='monitored file path'.
09-15-2014 07:30:56.856 DEBUG BatchReader - inflight=var/log/splunk/metrics.log.1 state=monitored file path rc=false
09-15-2014 07:30:56.856 DEBUG TailingProcessor - Will attempt to read file: monitored file path from existing fd.
09-15-2014 07:30:56.857 DEBUG TailingProcessor - About to read data (Reusing existing fd for file='monitored file path').
09-15-2014 07:30:56.857 DEBUG WatchedFile - seeking monitored file path to off=40984
09-15-2014 07:30:56.857 DEBUG WatchedFile - Reached EOF: /monitored file path (key= sptr=40984 scrc= fnamecrc= modtime=1410761758)
09-15-2014 07:30:57.863 DEBUG UTF8Processor - Done key received for: source::monitored file path::
09-15-2014 07:30:57.930 INFO AggregatorMiningProcessor - Setting up line merging apparatus for: source::monitored file path::
09-15-2014 07:30:57.930 INFO AggregatorMiningProcessor - Got done message for: source::monitored file path host::
09-16-2014 14:08:23.557 +0100 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='monitored file path'.


seema2502
Explorer

Hi Martin, we have 26 (a full forwarder) forwarding data to 34 (the indexer). Can you please explain once again? The context was not clear to me.


martin_mueller
SplunkTrust

Turn off forwarding on the indexer if there is no known reason for having it turned on.
Enable whatever is receiving the events forwarded by the indexer if there is a reason for having forwarding turned on on the indexer.
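Turning forwarding off on the indexer would look roughly like this (a sketch, assuming a default install under /opt/splunk; adjust paths for your environment):

```shell
# Sketch, assuming SPLUNK_HOME=/opt/splunk; adjust for your install.
SPLUNK_HOME=/opt/splunk

# Find which outputs.conf file defines the tcpout stanza:
"$SPLUNK_HOME/bin/splunk" cmd btool outputs list --debug

# After commenting out or removing the [tcpout...] stanza in that file,
# restart so the change takes effect:
"$SPLUNK_HOME/bin/splunk" restart
```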


seema2502
Explorer

We are in the support team; we are not sure why the deployment team configured Splunk in this manner.
The forwarder server is a full (heavy) forwarder, and it had been working fine for many months before 5th Sep.
Can you please suggest a way to fix this issue?
Thanks


martin_mueller
SplunkTrust

Why is the indexer forwarding its data to another indexer?

There could be good reasons for that, but the majority of Splunk installations do not, so make sure you actually want your indexer to forward data.


seema2502
Explorer

Hi,

The last time data was indexed was Sep 7, 2014 09:00:00 AM. After that, indexing stopped.

As per splunkd.log of our indexer,

  1. "INFO TcpInputProc - Stopping IPv4 port" - logged twice (at 14:18 and 15:14) for ports 30534, 30544, 30554 (once per hour) in splunkd.log on 5th Sep.
  2. "INFO TcpInputProc - Stopping IPv4 port" - logged zero times on 6th Sep 2014 ( Saturday )
  3. "INFO TcpInputProc - Stopping IPv4 port" - logged 30 times on 7th Sep 2014 ( Sunday ) - ie once for each port at 23:26
  4. "WARN TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds" - logged once only on 7th Sep 2014 ( Sunday ) at 23:26
  5. "ERROR TcpOutputFd - Connection to host= failed" was logged from 09-07-2014 23:26 to 09-07-2014 23:57 for 15 times. Didn't see this error on 5th or 6th Sep 2014.

On searching for blocked logs in metrics.log of our indexer,

  1. Nil on 5th and 6th Sep. From Sep 7 02:40 to Sep 8 15:04 --> blocked=true logged 3459 times.
  2. On further analysis, indexqueue, splunktcpin, typingqueue and aggqueue were found to be blocked as follows:

    09-07-2014 18:44:03.649 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=2493, largest_size=2504, smallest_size=1206
    09-07-2014 21:08:44.626 +0100 INFO Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=2508, largest_size=2517, smallest_size=323
    ....
    09-07-2014 23:59:21.983 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=2310, largest_size=2310, smallest_size=2310
    09-07-2014 21:08:44.626 +0100 INFO Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=2508, largest_size=2517, smallest_size=323
    ....
    09-08-2014 00:06:36.011 +0100 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1432, largest_size=1432, smallest_size=1410
    09-08-2014 01:51:55.416 +0100 INFO Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=2984, largest_size=2984, smallest_size=2952
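For what it's worth, the blocked-queue counts can be summarized with a quick shell pipeline (a sketch; the sample here reuses lines from this thread, and in practice you would pipe in $SPLUNK_HOME/var/log/splunk/metrics.log* instead):

```shell
# Sketch: summarize blocked=true queue events from metrics.log lines.
summarize_blocked() {
  # Pull out the queue name from each "group=queue ... blocked=true" line,
  # then count occurrences per queue, most frequent first.
  sed -n 's/.*group=queue, name=\([a-z]*\), blocked=true.*/\1/p' | sort | uniq -c | sort -rn
}

# Sample input taken from the log excerpt above:
summarize_blocked <<'EOF'
09-07-2014 18:44:03.649 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499
09-07-2014 21:08:44.626 +0100 INFO Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499
09-07-2014 23:59:21.983 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499
EOF
```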

Please suggest.
Thanks,
Seema


martin_mueller
SplunkTrust

Could it be that you're looking at an event logged by a forwarder that was forwarded to the indexer?

If that's really in splunkd.log on an indexer then check if any outputs.conf stanzas exist for that indexer.

./bin/splunk cmd btool outputs list

If that's in splunkd.log on a forwarder then check if receiving is enabled on the indexer.

./bin/splunk cmd btool inputs list splunktcp

seema2502
Explorer

09-07-2014 23:57:56.771 +0100 ERROR TcpOutputFd - Connection to host= failed
Here the IP address is that of our indexer, and the port is the one configured for forwarding data from the forwarder to the indexer.
What could cause this error? Is it a port connectivity issue?
When we telnet to the port, it fails.
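A quick way to check connectivity to the receiving port, as a sketch (the hostname and port 9997 are placeholders for your actual values):

```shell
# From the forwarder: test whether the indexer's receiving port accepts
# TCP connections (5-second timeout).
nc -z -w 5 indexer.example.com 9997 && echo "port open" || echo "port closed"

# On the indexer itself: confirm splunkd is actually listening on that port.
netstat -tln | grep 9997
```

If the port is closed on the indexer side, receiving may be disabled or blocked by a firewall between the two hosts.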


martin_mueller
SplunkTrust

Then I'm confused by the indexer complaining about a connection failing in TcpOutputFd.


seema2502
Explorer

No, Martin.


martin_mueller
SplunkTrust

Is your indexer forwarding data to some other system?


martin_mueller
SplunkTrust

License violations do not stop indexing, even current ones don't. What would stop with too many violations in one month is searching your non-internal data.

There must be a different reason for your indexer stopping. Look around the _internal index or, if it is not indexed, the log files in $SPLUNK_HOME/var/log/splunk, for clues around the time you last had data indexed.
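One way to do that from the CLI, as a sketch (the install path is assumed, the time window matches when indexing reportedly stopped, and the exact search terms are illustrative):

```shell
# Sketch: look for errors/warnings in splunkd.log around the time
# indexing stopped (path and times are assumptions; adjust to fit).
/opt/splunk/bin/splunk search \
  'index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) earliest=09/07/2014:00:00:00 latest=09/08/2014:12:00:00' \
  -maxout 200
```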
