Splunk Search

Your maximum disk usage quota has been reached - What does this mean?

mctester
Communicator

When I try to run a search in Splunk Web, I see this error message -

Your maximum disk usage quota has been reached. usage=6000MB quota=1000MB The search was not run. SearchId=123456789.3 

What does this message mean and how can I get rid of it? It's suddenly happening for all of my searches.

1 Solution

Mick
Splunk Employee

The problem here is that, by default, Splunk limits the amount of disk space a user can consume while searching. This ensures that one user running lots of inefficient searches can't eat up all of the resources, and it is documented here. That page also explains how to increase the quota in authorize.conf for a specific role.
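
For example (just a sketch - the role name and the value here are hypothetical, and the documentation page above is the authority on the attribute), a stanza in $SPLUNK_HOME/etc/system/local/authorize.conf to raise the quota for a given role might look like:

 [role_power]
 srchDiskQuota = 5000

The value is in MB, so this would allow roughly 5GB of search artifacts for members of that role.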

Searches run in a separate process from splunkd, so just because splunkd is killed or restarted doesn't necessarily mean that the searches themselves will also disappear.

If you want to get a view into what searches are running, you can click on the Jobs link in the top right corner and see all the current and recent searches by a user. Any jobs that persist on disk will take up space, and I believe the default lifetime of a search object is 15 minutes.

If you clear some space by deleting the jobs, the messages will cease.

The message being displayed is produced when a user tries to kick off a new search. Splunk first checks the disk quota used by previous searches to ensure there is enough space to store results in $SPLUNK_HOME/var/run/splunk. If you run 'du -sh *' on that directory, you'll be able to identify the searches that are using so much space and remove them manually. The directories may well have been cleaned up already by this point, unless you have set the retention time fairly high.
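
As a rough sketch of what that clean-up might look like on a *nix box (paths assume a default install, and the search ID at the end is just a placeholder - make sure a job is really no longer needed before deleting it):

 # see which directories under the search artifact area are largest (sizes in MB)
 cd $SPLUNK_HOME/var/run/splunk
 du -sm * | sort -rn | head

 # job artifacts usually sit under the dispatch subdirectory, one per search ID
 du -sm dispatch/* | sort -rn | head

 # once you are sure a job is no longer needed, remove its directory
 rm -rf dispatch/1234567890.1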

In the current version, 4.1, there is no 'self-monitoring' capability for searches, so they will continue to run and eat up disk space until they finish. The disk-quota check is run just once, at the beginning of the search, to ensure the space exists and the search can be run. If the check finds that the quota has been reached, the new search will NOT run, but previous large searches will not be finalized automatically. Also, real-time searches have specific windows and buffers to ensure they don't take up excessive disk space, so we don't expect them to be an issue in this case - just the historic searches.

In the upcoming version, 4.2, searches will be able to monitor themselves and auto-finalize when the quota is reached, and notifications will be pushed to users via the UI and logs.

arosario258
New Member

Wondering if I can get more information about @tlagatta’s comment at the bottom of this thread. Will ES search head performance really be affected by admin disk quota usage in the same way that the comment says SH performance is affected by non-admin disk quota usage?

tlagatta_splunk
Splunk Employee

I had this problem tonight on my local machine. I solved it by increasing the admin role's srchDiskQuota from 10000 to 100000. To do this, I created the file $SPLUNK_HOME/etc/system/local/authorize.conf and added the stanza:

 [role_admin]
 srchDiskQuota = 100000

Be careful about increasing this quota for non-admin users, as this can severely hamper performance.
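
For what it's worth, I believe changes to authorize.conf generally need a restart (or at least a configuration reload) before the new quota takes effect, e.g.:

 $SPLUNK_HOME/bin/splunk restart

After the restart, re-run one of the failing searches to confirm the quota message is gone.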

gkanapathy
Splunk Employee

Search results are preserved on disk until their expiration time. If you background a search, the expiration time is "never", so it must be manually deleted. This persists through restarts. For non-backgrounded searches, the expiration time is 10 minutes.
