Getting Data In

Why is my log file sometimes ignored?

twinspop
Influencer

Self-answered question follows. Perhaps it will help someone else in the same boat.

I have a file called portal-server.log on a log server (an NFS mount collecting logs from many machines) that Splunk periodically stops indexing after a log roll. The internal logs show:

09-30-2016 18:26:33.435 -0400 ERROR TailingProcessor - File will not be read, seekptr checksum did not match (file=/var/logs/host1048/portal-server.log).  Last time we saw this initcrc, filename was different.  You may wish to use a CRC salt on this source.  Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

I tried changing initCrcLength, but the problem returned. (And I steered clear of using crcSalt.) I checked the number of files on the log server. I checked the health of the NFS mount. So many avenues, all leading to dead ends.
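
For anyone hitting the same error: both of those knobs live in inputs.conf on the forwarder that monitors the file. A rough sketch of the kind of stanza I was experimenting with (the monitor path and the 1024 value are illustrative, not my actual config):

# inputs.conf -- illustrative only
[monitor:///var/logs/*/portal-server.log]
# hash more of the file head than the default 256 bytes when fingerprinting the file
initCrcLength = 1024
# crcSalt = <SOURCE> would mix the full path into the fingerprint;
# I avoided it because it can make rolled/renamed logs look like brand new files

initCrcLength controls how many leading bytes Splunk hashes to decide whether it has already seen a file, which is why a shared header at the top of every file is the usual trigger for this error.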

What is going on? Answer below...

1 Solution

twinspop
Influencer

I asked the user for the first few lines of the file, thinking maybe there was a header that an initCrcLength adjustment would fix. No, it was plain old syslog:

2016-09-30 00:00:00,836 WARN  - [APPID: ] [TXID: ] [UID: ] [ORGOID: ] [AOID: ] [UA_MODE: ] - com.cs.services.ws.handlers.somehandler.handleMessage(): HTTP Header OrgOID is NOT present in the header

Then it hit me. I quickly searched for that exact log line:

index=problem_index earliest=@d latest=@d+1m "2016-09-30 00:00:00,836 WARN" orgoid is not present

And there it was. But in another source! The error from the internal logs was legit: there was another log file with IDENTICAL content. Turns out someone on the development team had added a second appender to the log4j config, so two files were being written with the same events.
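
If you want to confirm this kind of duplication quickly, a search along these lines will group identical raw events and show which sources they came from (the index and time range are placeholders for your own):

index=problem_index earliest=@d latest=@d+1m | stats dc(source) AS source_count values(source) AS sources BY _raw | where source_count > 1

Any event with source_count > 1 is being picked up from more than one file, which in my case pointed straight at the second appender.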
