Hi @zaks191, please consider the points below to improve search performance in your environment.

1. Be specific in searches: Always use index= and sourcetype=, and add unique terms early in your search string to narrow down data quickly.
2. Filter early, transform late: Place filtering commands (like where and search) at the beginning and transforming commands (stats, chart) at the end of your SPL.
3. Leverage index-time extractions: Ensure critical fields are extracted at index time for faster searching, especially with JSON data.
4. Utilize tstats: For numeric or indexed data, tstats is highly efficient because it operates directly on pre-indexed data (.tsidx files), making it much faster than search | stats.
5. Accelerate data models: Define and accelerate data models for frequently accessed structured data. This pre-computes summaries, allowing tstats searches to run extremely fast.
6. Accelerate reports: For specific, repetitive transforming reports, enable report acceleration to store pre-computed results.
7. Minimize wildcards and regex: Avoid leading wildcards (*term) and complex, unanchored regular expressions, as they are resource-intensive.
8. Optimize lookups: For large lookups, consider KV Store lookups, or pre-generate summaries via scheduled searches.
9. Use the Job Inspector: Regularly analyze slow searches with the Job Inspector to pinpoint bottlenecks (e.g., search head vs. indexer processing).
10. Review limits.conf (carefully): While not a primary fix, review settings such as max_mem_usage_mb or max_keymap_rows in limits.conf after monitoring resource usage, and proceed only with caution and thorough testing.
11. Set up alerts for expensive searches: Use internal metrics to detect problematic searches.
12. Monitor and limit user search concurrency: Users running unbounded or wide time-range ad hoc searches can harm performance.

Happy Splunking!
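To illustrate points 1 and 2, here is a hedged before/after sketch. The index name `web` and the field values are hypothetical placeholders; substitute your own index, sourcetype, and terms.

```
Inefficient: scans everything, then filters late on the search head

    index=* error
    | search sourcetype=access_combined status=500

Better: specific index/sourcetype and unique terms up front, so the
indexers discard non-matching events as early as possible

    index=web sourcetype=access_combined status=500 error
    | stats count by clientip
```

The second form lets Splunk use the index-level metadata and the raw-term lexicon to skip buckets and events before any search-time processing happens.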
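As a sketch of point 4, compare an event search with its tstats equivalent. Again, `index=web` and the `status` field are assumptions for the example; tstats can only group by fields that are indexed (or by data-model fields once the model is accelerated, per point 5).

```
Event search: retrieves and parses raw events before counting

    index=web sourcetype=access_combined
    | stats count by status

tstats: reads the .tsidx files directly, no raw-event retrieval
(works here only if status is an indexed field)

    | tstats count where index=web sourcetype=access_combined by status

With an accelerated data model (e.g. the CIM Web data model),
the same idea applies to search-time fields:

    | tstats count from datamodel=Web by Web.status
```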
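For points 11 and 12, a common starting point is the _audit index, which records search activity on the search head. A rough sketch (field availability can vary by version, so verify against your own _audit events):

```
Find the longest-running completed searches per user over the
selected time range, as candidates for alerting or tuning

    index=_audit action=search info=completed
    | stats max(total_run_time) AS run_time_sec count BY user search
    | sort - run_time_sec
    | head 20
```

Saving a variant of this as a scheduled alert gives you an early warning when a user or saved search starts consuming disproportionate resources.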