The Splunk for Palo Alto Networks app seems to be consuming all my disk space. I have the index set to a max size of 168 GB, but there is also a data model that consumes about 171 GB of space. Is there any way to prevent this from using all our space?
The problem actually seems to be that the app creates another database called datamodel_summary, and that datamodel_summary is larger than the actual database:
/pan_logs# du -h -d 1
49G ./colddb
4.0K ./thaweddb
171G ./datamodel_summary
116G ./db
335G .
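For what it's worth, the per-model summary sizes can also be checked from the search bar (a sketch; the by_tstats=t flag limits the summarization REST endpoint's output to data model acceleration summaries, and summary.size is reported in bytes):

| rest /services/admin/summarization by_tstats=t splunk_server=local
| eval size_mb = round('summary.size' / 1048576, 1)
| table summary.id, size_mb, summary.complete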
Why is datamodel_summary so large and how can I make it smaller?
I've changed the acceleration retention to only 7 days based on this: http://answers.splunk.com/answers/136089/how-to-manage-datamodel-acceleration-storage-tstatshomepath...
But can I delete the current datamodel_summary directory and then have Splunk rebuild it? For reference, the change I made is sketched below.
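The edit went into a local datamodels.conf, roughly like this (a sketch: the stanza name must match the data model's name, and pan_firewall here is only a guess for the Palo Alto app's model):

[pan_firewall]
acceleration = 1
# keep only the last 7 days of accelerated summaries
acceleration.earliest_time = -7d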
If you set the index properties in indexes.conf for those indexes, Splunk will age the data out automatically based on your settings: buckets roll from hot/warm to cold as the size limit is reached, and buckets older than the frozen period are deleted (the default freeze action), so you never get disk warnings. For example:
[default]
# 7776000 seconds = 90 days; older buckets are frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
# cap hot/warm (homePath) storage at 25000 MB (25 GB)
homePath.maxDataSizeMB = 25000
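After editing indexes.conf, restart Splunk so the new limits take effect, then re-check usage the same way as in the question (a minimal sketch, assuming a standard $SPLUNK_HOME install):

# restart so the new index limits are applied
$SPLUNK_HOME/bin/splunk restart
# re-check disk usage from the index directory
du -h -d 1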