We are developing 4-5 dashboards with around 10 charts in each dashboard. When we work on multiple dashboards, we frequently hit the "Maximum Disk Usage" error. There are also 10+ alerts running every day, and their search results are configured to be stored for 24 hours.
I am not sure what the root cause of this error is, or where to optimize. Some candidates:
1. Geo enrichment: Because the data is not readily available in the events, we calculate the country and continent from geo-location. (The country code will be delivered with the events in the future.) This calculation runs for every event as part of the base search.
2. Number of events: The dashboards default to "last 30 days", so the number and size of the events to be handled are large.
3. Alerts / past search jobs: The search results of the alerts are stored for 24 hours, while ad-hoc search job results are stored for 10 minutes. The dashboard and alert searches are mostly reporting/monitoring in nature, so they mainly aggregate events and the final search results are not large.
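For reference, the per-event geo calculation in point 1 is roughly of the following shape (the index, sourcetype, and IP field name here are illustrative, not our actual base search):

```spl
index=web sourcetype=access_combined earliest=-30d
| iplocation clientip
| stats count by Country
```

Because the enrichment runs on every raw event across the 30-day window, the intermediate result set is much larger than the final aggregated table.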
Could someone please clarify which of these is likely causing the disk usage issue?
We could always request more disk space for the developers who create the dashboards, but we would like to avoid the same issue affecting users as well. Hence I would like to understand the root cause.
Disk usage varies over time. While a search is running and accumulating results, it may require *a lot more* disk space for that phase than it needs after the search has finished. How much depends on the searches (mostly on how much "work" the search head (SH) needs to do to interweave results), the size of the returned data, your architecture, and other factors.
So a search may require, in some cases, 100 MB of space while running and accumulating results from the indexers, but then finalize to only 12 MB when done.
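You can see how much dispatch space each job is actually consuming on the search head by querying the jobs endpoint — a rough sketch (run it with admin privileges to see all users' jobs):

```spl
| rest /services/search/jobs
| table label title diskUsage ttl runDuration isDone
| sort - diskUsage
```

`diskUsage` is reported in bytes, so the biggest offenders sort to the top; that usually points straight at the dashboards or alerts responsible.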
I believe converting these to saved reports and accelerating them can alleviate most of this. https://docs.splunk.com/Documentation/Splunk/8.0.6/Knowledge/Manageacceleratedsearchsummaries
As could using a data model and accelerating that.
https://docs.splunk.com/Documentation/Splunk/8.0.6/Knowledge/Managedatamodels
Either of the above will also make those dashboards load way faster, and use far fewer resources when doing so.
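As a sketch, enabling acceleration on a saved report in savedsearches.conf looks something like this (the stanza name and search are illustrative; you can also enable it in the UI via the report's Edit > Edit Acceleration menu):

```ini
# savedsearches.conf (stanza name and search are illustrative)
[Dashboard Base Search]
search = index=web sourcetype=access_combined | iplocation clientip | stats count by Country
auto_summarize = 1
# Keep the summary range aligned with the dashboards' default 30-day window
auto_summarize.dispatch.earliest_time = -30d@d
```

The dashboards then reference the saved report, and Splunk answers from the pre-built summary instead of re-scanning 30 days of raw events on every load.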
Happy Splunking,
Rich