
500MB diskUsage limit damage?

Yarsa
Path Finder

Hi,
I was wondering how harmful it is to get near that limit in a single search query.
If some of my searches take more than a few minutes to return, should I be questioning the way I built them?

By the way I am already using summary indexes and jobs in other places.
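For context, here is roughly how I have been checking how close my jobs get to the quota - a sketch using the REST search jobs endpoint (I believe diskUsage is reported in bytes):

    | rest /services/search/jobs splunk_server=local
    | table label, diskUsage, runDuration, ttl
    | sort - diskUsage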


bmacias84
Champion

@yarsa, I'm not sure of your hardware configuration, how many concurrent searches/users you have, or what your searches look like. Reaching that limit isn't too harmful beyond slow searches and possibly slower indexing if your hardware resources have high utilization.

Troubleshooting Search Quotas

  1. I would install SOS (Splunk On Splunk); it will help you determine which searches/users are eating up your resources.
  2. Once you have identified your problem searches, use the Search Job Inspector to find where your highest execution costs are.
  3. You may also want to limit the time range and the number of searches each user can run (see the authorize.conf sketch after this list).
  4. If you find that your dashboards are taking up most of your searches, you may want to invest time in combining them into post-process searches (see the dashboard sketch after this list).
  5. Also look at how you have your indexes broken out.
  6. Evaluate your saved searches. Can some of those be run overnight or during off hours?
  7. Review your disk metrics (disk queue length, reads/writes per second, etc.).
  8. Consider using bloom filters.
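For point 3, per-role search quotas can be set in authorize.conf. A minimal sketch - the role name and numbers here are illustrative, not recommendations:

    # authorize.conf -- example role stanza; values are illustrative
    [role_power]
    srchDiskQuota = 500    # max disk space (MB) this role's search jobs may use
    srchJobsQuota = 3      # max concurrent historical searches per user
    srchTimeWin = 86400    # max search time window, in seconds (one day)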
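For point 4, a post-process dashboard runs one base search and derives each panel from it, so several panels cost roughly one search. A minimal Simple XML sketch (index, sourcetype, and field names are placeholders; note the base search must pass along every field the post-process searches need):

    <dashboard>
      <label>Post-process example</label>
      <search id="base">
        <query>index=web sourcetype=access_combined | fields status, uri</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <row>
        <panel>
          <chart>
            <search base="base">
              <query>stats count by status</query>
            </search>
          </chart>
        </panel>
        <panel>
          <table>
            <search base="base">
              <query>top limit=10 uri</query>
            </search>
          </table>
        </panel>
      </row>
    </dashboard>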

I also recommend obtaining a copy of Exploring Splunk.

Search Tips from "Exploring Splunk":

  1. Filter out unneeded fields as soon as possible.
  2. Filter out results before calculations (see the example after this list).
  3. Turn off field discovery.
  4. Use the Advanced Charting view over the Timeline view; the timeline has higher costs.
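To illustrate tips 1 and 2, here are two versions of the same report (index and field names are placeholders). Slower, with filtering done late:

    index=web | stats count by clientip, uri, status | search status=500

Faster, filtering first and keeping only the fields needed:

    index=web status=500
    | fields clientip, uri, status
    | stats count by clientip, uri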

Other things to note: dense searches are faster than sparse searches, rare-term searches have a high I/O cost, and low-cardinality searches are also faster (see the contrast below).
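A quick contrast, again with placeholder names. A dense search, where most events in the time range match, reads buckets efficiently:

    index=web sourcetype=access_combined | timechart count

A rare-term search, where only a handful of events match, pays a high I/O cost to locate them (this is where bloom filters help):

    index=web sessionid=3f2a9c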

Additional Reading:

Bloom Filters

Search Job Inspector

Optimize Search Speed

Exploring Splunk

Optimize Splunk for Peak Performance

Types of Searches ("Super-Sparse" and "Rare Term" searches)

Post-Process Searches

Hope this helps you.

DaveSavage
Builder

Yarsa - I've never heard of issues with single-search limits; after all, that's what big data is all about. But I would certainly check out your 'expensive searches' and see which ones are machine intensive (Search > Status > Search... etc.). You can also set the default time period a search covers, changing it from 'All Time' to something more reasonable (sketch below). If you run a lot of ad-hoc searches and see some results of interest, also finalise the search... unless you need the full set. Hope this helps!
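If it helps, I believe the default time range for the search view can be set in ui-prefs.conf; a minimal sketch (the values here are just illustrative):

    # ui-prefs.conf -- default time range for the search view
    [search]
    dispatch.earliest_time = -24h@h
    dispatch.latest_time = now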
Br
D
