Splunk Search

What causes Maximum Disk Usage error?

rajkskumar
Explorer

We are developing four to five dashboards with around 10 charts in each. When we work on multiple dashboards, we frequently hit the maximum disk usage error. In addition, more than 10 alerts run every day, and their search results are configured to be retained for 24 hours.

I am not sure what the root cause of this error is, or where to optimize. Here is what could be contributing:

1. Because the country code is not readily available in the data, we calculate the country and continent from geo-location information, and this calculation happens for every event as part of the base search. (The country code will be delivered with the events in the future.)
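For illustration, per-event geo enrichment in a base search typically looks something like the sketch below. The index, sourcetype, field names, and the `country_to_continent` lookup are assumptions for the example, not details from the original post:

```
index=web sourcetype=access_combined
| iplocation clientip
| lookup country_to_continent Country OUTPUT Continent
| stats count by Country, Continent
```

Because `iplocation` (and the lookup) run against every raw event, this enrichment alone can dominate the cost of a 30-day search; once the country code arrives with the events themselves, these enrichment steps can simply be dropped.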

2. Number of events: the dashboards default to 'last 30 days'. Because of this, the number and total size of events to be handled is large.

3. Alerts / past search jobs: alert search results are stored for 24 hours, and ad-hoc search job results are configured to be stored for 10 minutes. The dashboard and alert searches are mostly reporting/monitoring in nature, so they mostly aggregate events, and the final search results should not be large.
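For reference, the retention of alert job artifacts is controlled per saved search in savedsearches.conf. A minimal sketch, with a hypothetical stanza name and assuming the defaults are being overridden:

```
# savedsearches.conf -- hypothetical alert stanza
[Daily Error Alert]
# keep the dispatched job's artifacts for 1 hour instead of 24
dispatch.ttl = 3600
# triggered-alert records also expire after 1 hour
alert.expires = 1h
```

Shortening these TTLs reduces how long finished alert artifacts occupy dispatch disk space, at the cost of the results being viewable for a shorter window.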

Could someone please clarify which of these would be causing this disk usage issue?

We could always request more disk space for the developers who create the dashboards, but we would like to avoid the same issue surfacing for users as well. Hence, I would like to understand the root cause of this issue.

1 Solution

Richfez
SplunkTrust

Disk usage varies over time. While a search is running and accumulating results, it may require *a lot more* disk space during that phase than it needs after the search has finished. How much depends on the searches (mostly on how much "work" the search head needs to do to interweave results), the size of the returned data, your architecture, and other factors.

So a search may require, in some cases, 100 MB of space while running and accumulating results from the indexers, but then finalize to only 12 MB when done.

I believe converting these to saved reports and accelerating them can alleviate most of this.  https://docs.splunk.com/Documentation/Splunk/8.0.6/Knowledge/Manageacceleratedsearchsummaries
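Report acceleration requires the saved search to be a transforming search. A sketch of what that could look like in savedsearches.conf, with the stanza name, search, and summary range all assumed for illustration:

```
# savedsearches.conf -- hypothetical accelerated report
[Country Counts]
search = index=web sourcetype=access_combined | iplocation clientip | stats count by Country
# build and maintain a summary covering the dashboards' 30-day default window
auto_summarize = 1
auto_summarize.dispatch.earliest_time = -30d
```

Dashboard panels that reference this report then read from the pre-built summary instead of re-scanning 30 days of raw events on every load.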

As could using a data model and accelerating that. 
https://docs.splunk.com/Documentation/Splunk/8.0.6/Knowledge/Managedatamodels
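Once a data model is accelerated, dashboard panels can query the summary with `tstats` instead of searching raw events. A sketch, assuming a data model named "Web" with a `status` field has been defined and accelerated:

```
| tstats summariesonly=true count from datamodel=Web by Web.status
```

With `summariesonly=true`, the search reads only the accelerated summary, which is typically far cheaper in both time and dispatch disk usage than the equivalent raw-event search.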

Either of the above will also make those dashboards load way faster, and use far fewer resources when doing so.

Happy Splunking,

Rich

