Getting Data In

Only the first line from each new logfile is logged.

OGJ
Engager

Hi.

We are seeing weird behaviour on one of our universal forwarders. We have been sending logs from this forwarder for quite a while, and it has been working properly the entire time. New logfiles are created every two hours, and log lines are appended to the newest file.

Last night the universal forwarder stopped working normally. When a new file is created, the forwarder sends its first line to Splunk, but lines appended later on are not being forwarded. There are no errors logged in the splunkd.log file on the forwarder, nor any error messages on the receiving indexers. Every time a new file is generated, the forwarder sends the first line to Splunk, and the appended lines seem to be ignored.

As far as I can see, there have not been any changes on the forwarder, nor on the Splunk servers, that might cause this issue.

Is there any way to debug the parsing of the logfile on the forwarder to identify the issue? Any other ideas about what could be causing this?

Thanks.


OGJ
Engager

The issue somewhat resolved itself from one day to the next, without any modifications on our side. I have dug into the _internal index and the logfiles on the UF without finding any indication of why this suddenly started working again. I will re-post if the error re-occurs.


isoutamo
SplunkTrust

Hi

Another thing you should do is check whether the UF is actually reading that file or not. You can do that by running

splunk list inputstatus

on the UF side.
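
For example, on a Linux host with the forwarder installed under /opt/splunkforwarder (that path and the file name below are just assumptions for illustration), you could filter the output for the file in question:

/opt/splunkforwarder/bin/splunk list inputstatus | grep -A 5 "appname.log"

For each monitored file, the output should show whether the tailing processor has the file open and how far into it the forwarder has read, which tells you whether the UF is seeing the appended lines at all.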

r. Ismo 


m_pham
Splunk Employee

You may need to modify one or both of the settings below in your inputs.conf to get Splunk to ingest the appended logs. It's kind of hard to say without seeing a sample of your full log (with redacted info); alternatively, you can read the config details below and make the determination yourself.

Can you search index=_internal for the specific host, using the name of the log file you're interested in as a search string? It should show what the UF is doing when it monitors that file path. Commonly, folks use crcSalt = <SOURCE> when they have issues with Splunk not ingesting a log file.
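
For example, a search along these lines (the host and file name below are placeholders) should surface the tailing activity for that file:

index=_internal sourcetype=splunkd host=my-uf-host (TailReader OR WatchedFile) "appname.log"

Messages from the TailReader / WatchedFile components typically show when the file is opened, when it is skipped because its CRC matches a file that was already read, and when reading finishes.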

 

crcSalt = <string>
* Use this setting to force the input to consume files that have matching CRCs,
  or cyclic redundancy checks.
    * By default, the input only performs CRC checks against the first 256
      bytes of a file. This behavior prevents the input from indexing the same
      file twice, even though you might have renamed it, as with rolling log
      files, for example. Because the CRC is based on only the first
      few lines of the file, it is possible for legitimately different files
      to have matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the
  full directory path to the source file is added to the CRC. This ensures
  that each file being monitored has a unique CRC. When 'crcSalt' is invoked,
  it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead
  to the log file being re-indexed after it has rolled.
* In many situations, 'initCrcLength' can be used to achieve the same goals.
* Default: empty string

initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to
  identify whether it has already seen the file.
* You might want to adjust this if you have many files with common
  headers (comment headers, long CSV headers, etc) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting causes data to be re-indexed. You
  might want to consult with Splunk Support before adjusting this value - the
  default is fine for most installations.
* Default: 256 (bytes)
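
For reference, a monitor stanza using these settings might look like the following; the path and sourcetype are placeholders, and per the notes above you would usually pick either crcSalt or a larger initCrcLength rather than both:

[monitor:///var/log/myapp/app_*.log]
sourcetype = myapp:log
# Force a unique CRC per monitored file path (see the crcSalt notes above)
crcSalt = <SOURCE>
# Or, alternatively, hash a longer file header before deciding the file was already seen
# initCrcLength = 1024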

 

https://docs.splunk.com/Documentation/Splunk/latest/Admin/inputsconf 
