Getting Data In

Why is my log file sometimes ignored?

twinspop
Influencer

Self-answered question follows. Perhaps it will help someone else in the same boat.

I have a file called portal-server.log on a log server (an NFS mount collecting logs from many machines) that Splunk periodically stops reading after a log roll. The internal logs show:

09-30-2016 18:26:33.435 -0400 ERROR TailingProcessor - File will not be read, seekptr checksum did not match (file=/var/logs/host1048/portal-server.log).  Last time we saw this initcrc, filename was different.  You may wish to use a CRC salt on this source.  Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

I tried changing initCrcLength, but the problem returned. (And I steered clear of using crcSalt.) I checked the number of files on the log server. I checked the health of the NFS mount. So many avenues, all leading to dead ends.
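For anyone trying the same knobs, they live in inputs.conf. This is only a sketch of what such a stanza could look like (the monitor path and sourcetype are placeholders, not my actual config); initCrcLength widens how much of the file head Splunk hashes to tell files apart, and crcSalt = <SOURCE> mixes the file path into that hash:

[monitor:///var/logs/*/portal-server.log]
sourcetype = portal_server
index = problem_index
# hash more than the default 256 bytes of the file head when computing the CRC
initCrcLength = 1024
# crcSalt = <SOURCE> would force per-path CRCs, but I avoided it here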

What is going on? Answer below...

1 Solution

twinspop
Influencer

I asked the user for the first few lines of the files, thinking maybe there was a header that an initCrcLength adjustment would fix. No, it was plain old syslog:

2016-09-30 00:00:00,836 WARN  - [APPID: ] [TXID: ] [UID: ] [ORGOID: ] [AOID: ] [UA_MODE: ] - com.cs.services.ws.handlers.somehandler.handleMessage(): HTTP Header OrgOID is NOT present in the header

Then it hit me. I quickly searched for that exact log line:

index=problem_index earliest=@d latest=@d+1m "2016-09-30 00:00:00,836 WARN" orgoid is not present

And there it was, but in another source! The error from the internal logs was legit: there was another log file with IDENTICAL content. It turns out someone on the developer team had added another appender to the log4j config, so the same lines were being written to two different files, and Splunk treated the second file as one it had already read.
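If you hit the same checksum error, one quick way to confirm that two inputs carry identical content is to search for a distinctive line and split by the default source and host fields, something along these lines:

index=problem_index earliest=@d latest=@d+1m "2016-09-30 00:00:00,836 WARN"
| stats count by source, host

Seeing two or more values of source for the same line is a strong hint that a duplicate file, not Splunk, is the real culprit.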
