Monitoring Splunk

How do I determine a user's current disk quota usage?

Lowell
Super Champion

I recently ran into some issues with a user's disk quota. I've increased the defaults a bit, but I can't seem to find an easy way to determine a user's current usage. (Actually, I couldn't find any jobs for the user, so something weird was happening.)

Is there a REST call for this? Obviously this information is available somewhere, because I can see in splunkd.log the current disk space used when a user violates their limits. It would be nice to know when a user is getting close to their limit, or, after they've hit a limit, how much space they've cleaned up.
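
In a pinch, I suppose those splunkd.log messages could be pulled back out of the _internal index, roughly like the search below (the exact wording to match on is a guess on my part, so treat it as a sketch):

index=_internal sourcetype=splunkd quota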

Is there any way to do this (preferably from within Splunk)?

1 Solution

_d_
Splunk Employee

Try this search on your search head:

| rest splunk_server=local /services/search/jobs | eval diskUsageMB=diskUsage/1024/1024 | stats sum(diskUsageMB) by eai:acl.owner
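
If you want to see how close each user is to the cap, you could also pull the configured per-role quota alongside it; a rough sketch, assuming the roles endpoint exposes the srchDiskQuota setting:

| rest splunk_server=local /services/authorization/roles | table title srchDiskQuota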

Hope this helps.

d.

Lowell
Super Champion

In my initial use case there were no jobs listed in the "Job Manager" for the user who had supposedly used up their quota, which is why I was wondering if there was a more direct way to get at this value. (But perhaps, internally, each splunkd process iterates over the entire dispatch folder and sums it up, as your search does. I'm not sure.) I had only created that user earlier that afternoon.

Lowell
Super Champion

The rest search shown here never seems to return more than 500 results, but there are more than 500 directories under $SPLUNK_HOME/var/run/splunk/dispatch, so I must be hitting some kind of limit somewhere.
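
If the cap is coming from default paging on the endpoint rather than anything on disk, maybe asking the rest command to return everything would help; a sketch, assuming count=0 means "no limit" here:

| rest splunk_server=local /services/search/jobs count=0 | eval diskUsageMB=diskUsage/1024/1024 | stats sum(diskUsageMB) by eai:acl.owner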
