
How do we overcome this Out of Memory error for a very large single-line input record?

EricLloyd79
Builder

We are currently using Hunk and MapR to read in some dummy data, basically a collection of repeating events, to test data ingestion on MapR.
In my script, events were originally separated by a single newline. Then this error appeared. I added a second newline, thinking a few events might be getting concatenated, but the error came back.
The question is: how do we avoid this error? The message says we can set mapreduce.input.linerecordreader.line.maxlength to a lower value, but as far as I can see the only place that property appears is in a Java source file. Is there an XML properties file where it can be changed?
Has anyone else seen an error like this?
IOException - Out of memory error while reading a very large single line input record. To skip this record set mapreduce.input.linerecordreader.line.maxlength to a lower value. Current value: 2147483647, jvm heap size: 508035072, potential value: 31752192
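
For reference, mapreduce.input.linerecordreader.line.maxlength is a standard Hadoop job property, so it should not require editing Java source. On a plain Hadoop cluster it would normally go in mapred-site.xml; the value below is just the "potential value" suggested by the error message, not a recommendation:

    <!-- mapred-site.xml -->
    <property>
      <name>mapreduce.input.linerecordreader.line.maxlength</name>
      <!-- maximum bytes in a single line before LineRecordReader skips the record -->
      <value>31752192</value>
    </property>

In Hunk specifically, my understanding is that Hadoop job properties can be passed through the provider stanza in indexes.conf by prefixing them with vix., along these lines (the stanza name is hypothetical):

    [provider:mapr-provider]
    vix.mapreduce.input.linerecordreader.line.maxlength = 31752192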

1 Solution

EricLloyd79
Builder

This was traced to an artifact produced by our log-rotation script. We rewrote the Perl script in Python and the error no longer occurs.
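
Neither script appears in the thread, but a single enormous "line" is the classic symptom of a rotation step that drops, or never writes, a record's trailing newline. A minimal Python sketch of the idea, with hypothetical paths and names rather than the poster's actual script:

    import os
    import time

    def rotate(log_path: str) -> str:
        """Rename the live log aside and make sure the rotated file ends
        with a newline, so its last record is never an unterminated line."""
        rotated = f"{log_path}.{time.strftime('%Y%m%d-%H%M%S')}"
        os.rename(log_path, rotated)  # assumes the writer reopens the log afterwards
        with open(rotated, "rb") as r:
            r.seek(0, os.SEEK_END)
            size = r.tell()
            last = b""
            if size:
                r.seek(-1, os.SEEK_END)
                last = r.read(1)
        if size and last != b"\n":
            with open(rotated, "ab") as w:
                w.write(b"\n")  # terminate the final record
        return rotated

    rotate("/var/log/dummy/events.log")  # hypothetical path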
