Monitoring Splunk

What does "Events may not be returned in sub-second order due to memory pressure." mean?

jtrucks
Splunk Employee

What does "Events may not be returned in sub-second order due to memory pressure." mean?

--
Jesse Trucks
Minister of Magic
1 Solution

hexx
Splunk Employee

This message indicates that we are throttling the rate at which a search process reads event raw data (the _raw field) in order to keep the search process's memory usage low.
We've seen high memory usage when searches encounter patches of large (~10KB or more) _raw fields, so when that happens we slow down the rate at which those events are read.

That being said, we also discovered that:

  • This message is not informative enough
  • The default threshold is too low

We are adjusting both of those things in a future release.

In the meantime, if you have enough memory available that you feel comfortable allowing searches to use more of it, you can increase this threshold by changing the value of max_rawsize_perchunk in limits.conf:

max_rawsize_perchunk = <integer>
* Maximum raw size of results per call to search (in dispatch).
* 0 = no limit.                   
* Defaults to 100000000 (100MB)
* Not affected by chunk_multiplier
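
For example, a change like this would typically go in a local configuration file rather than a default one; the path and value below are illustrative only (this sketch assumes doubling the 100MB default described above):

# $SPLUNK_HOME/etc/system/local/limits.conf (illustrative path and value)
[search]
# Double the 100000000 (100MB) default quoted in the spec excerpt above.
max_rawsize_perchunk = 200000000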

saravanan90
Contributor

max_rawsize_perchunk = <integer>
* The maximum raw size, in bytes, of results for each call to search (in dispatch).
* When set to "0": Specifies that there is no size limit.
* This setting is not affected by the "chunk_multiplier" setting.
* Default: 100000000 (100MB)

This setting goes in the [search] stanza of limits.conf:

[search]
max_rawsize_perchunk=0

smileyge
Path Finder

On Splunk 6.1.1, I ran across this for the first time when I indexed a new source type with larger events than my other source types. Some events in this source type are many kilobytes, but there aren't that many of them. Increasing max_rawsize_perchunk resolved it and doesn't appear to have caused any noticeable change in the memory load.

smileyge
Path Finder

What do you consider a high value? I did default * 5.
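
For clarity, default * 5 works out to the value below, assuming the 100000000 (100MB) default quoted earlier in the thread (illustrative sketch only):

[search]
# 5 x 100000000 = 500000000, roughly 500MB of raw data per chunk
max_rawsize_perchunk = 500000000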

the_wolverine
Champion

We were advised by Splunk to set a high value for max_rawsize_perchunk. However, in testing it caused Safari to hang, so we could not implement that suggestion.

the_wolverine
Champion

In my own testing, I suspect this is due to large numbers of events with the exact same timestamp, possibly caused by un-timestamped events being timestamped by Splunk as they are indexed. This was not an issue in version 4.3.x; it only started occurring in version 5.0.3. Setting max_rawsize_perchunk may alleviate the symptom, but it does not address the underlying issue.

adamsmith47
Communicator

We have an app which dumps approximately 100,000 events at a time into the index, all with the same timestamp, and each event is quite large, many with hundreds of lines. When I received the above error, the search performed very slowly, taking 30+ minutes to finish. I slowly increased max_rawsize_perchunk without any change, then jumped up to default*10, putting it at 1,000,000,000. The error disappeared, the search completed in 3 minutes, and there was minimal noticeable memory and CPU usage on the search head/indexer (the same machine).

Default*10 worked well for me, so far.

fabiob
Explorer

Hi hexx, how did you slow down the rate at which events are read?
If I understood correctly, "max_rawsize_perchunk" basically increases the amount of memory the search process can use, but how do you tune the read rate? Thanks!

jrubio1
New Member

After changing the limits.conf file, do I need to restart a service?
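
For reference, if a restart does turn out to be necessary, the standard CLI restart is shown below; whether this particular limits.conf change requires one is not confirmed in this thread:

# Restart the Splunk instance whose limits.conf was edited
$SPLUNK_HOME/bin/splunk restart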

rturk
Builder

Hi jtrucks,

This article is a bit old (2010) but it's high-level enough to give you an idea of what might be happening.

http://blogs.splunk.com/2010/02/03/splunk-memory-use-patterns/

What is the memory usage on your Splunk server? Is it regularly paging to virtual memory? From the description of the error, Splunk may determine that it's running short on memory, so it will attempt to conserve memory by parsing timestamps only down to the second when running a search. This may not actually affect you (depending on the granularity of the timestamps in your events), but it is definitely indicative of an issue that needs investigation.

Cheers 🙂

RT

EDIT: There's also this article on Memory Pressure - same recommendation (more RAM).
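
As a quick way to check the memory and paging questions raised above, standard OS tools are enough, assuming a Linux host (illustrative commands, not Splunk-specific):

# Watch the si/so (swap-in/swap-out) columns for paging activity
vmstat 1 5
# Show overall memory and swap usage in megabytes
free -m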

splunk_one
New Member

Hi jtrucks, have you tuned this parameter? Which value works for your configuration? Thanks

jtrucks
Splunk Employee

Actually, it's barely touching the 48G of RAM in the box. I suspect it's the configured per-search limits (which are at their defaults), since the search is returning hundreds of thousands or millions of results when I get this error.

--
Jesse Trucks
Minister of Magic