<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How can I clear old logs when they hit 220 GB? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290954#M55497</link>
    <description>&lt;P&gt;There's no such file at&lt;BR /&gt;
$SPLUNK_HOME\etc\system\local\indexes.conf&lt;/P&gt;

&lt;P&gt;indexes.conf exists only in the default folder.&lt;/P&gt;</description>
    <pubDate>Wed, 16 Aug 2017 13:15:34 GMT</pubDate>
    <dc:creator>eladelad</dc:creator>
    <dc:date>2017-08-16T13:15:34Z</dc:date>
    <item>
      <title>How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290950#M55493</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;
My Splunk installation gets bigger every day.&lt;BR /&gt;
I'm using only 3-4 modules.&lt;BR /&gt;
The problem is that every change I apply to indexes.conf (Windows install)&lt;BR /&gt;
has no effect.&lt;BR /&gt;
I have 220 GB allocated for logs, but the indexes grow to fill that space and don't respect the conf settings.&lt;/P&gt;

&lt;P&gt;I just want to clear the oldest logs when they hit 220 GB. At this time I don't care about anything "older" than 220 GB.&lt;/P&gt;

&lt;P&gt;Your help is needed.&lt;/P&gt;

&lt;P&gt;These are my settings:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# DO NOT EDIT THIS FILE!

################################################################################
# "global" params (not specific to individual indexes)
################################################################################
sync = 0
indexThreads = auto
memPoolMB = auto
defaultDatabase = main
enableRealtimeSearch = true
suppressBannerList = 
maxRunningProcessGroups = 8
maxRunningProcessGroupsLowPriority = 1
bucketRebuildMemoryHint = auto
serviceOnlyAsNeeded = true
serviceSubtaskTimingPeriod = 30
maxBucketSizeCacheEntries = 0
processTrackerServiceInterval = 1
hotBucketTimeRefreshInterval = 10

################################################################################
# index specific defaults
################################################################################
maxDataSize = auto_high_volume
maxWarmDBCount = 300
frozenTimePeriodInSecs = 15552000
rotatePeriodInSecs = 60
coldToFrozenScript = 
coldToFrozenDir = 
compressRawdata = true
maxTotalDataSizeMB = 250000
maxMemMB = 5
maxConcurrentOptimizes = 6
maxHotSpanSecs = 7776000
maxHotIdleSecs = 0
maxHotBuckets = 3
quarantinePastSecs = 25920000
quarantineFutureSecs = 2592000
rawChunkSizeBytes = 131072
minRawFileSyncSecs = disable
assureUTF8 = false
serviceMetaPeriod = 25
partialServiceMetaPeriod = 0
throttleCheckPeriod = 15
syncMeta = true
maxMetaEntries = 1000000
maxBloomBackfillBucketAge = 30d
enableOnlineBucketRepair = true
enableDataIntegrityControl = false
maxTimeUnreplicatedWithAcks = 60
maxTimeUnreplicatedNoAcks = 300
minStreamGroupQueueSize = 2000
warmToColdScript= 
tstatsHomePath = volume:_splunk_summaries\$_index_name\datamodel_summary
homePath.maxDataSizeMB = 0
coldPath.maxDataSizeMB = 0
streamingTargetTsidxSyncPeriodMsec = 5000
journalCompression = gzip
enableTsidxReduction = false
tsidxReductionCheckPeriodInSec = 600
timePeriodInSecBeforeTsidxReduction = 604800

#
# By default none of the indexes are replicated.
#
repFactor = 0

[volume:_splunk_summaries]
path = $SPLUNK_DB



################################################################################
# index definitions
################################################################################

[main]
homePath   = $SPLUNK_DB\defaultdb\db
coldPath   = $SPLUNK_DB\defaultdb\colddb
thawedPath = $SPLUNK_DB\defaultdb\thaweddb
tstatsHomePath = volume:_splunk_summaries\defaultdb\datamodel_summary
maxMemMB = 20
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume

[history]
homePath   = $SPLUNK_DB\historydb\db
coldPath   = $SPLUNK_DB\historydb\colddb
thawedPath = $SPLUNK_DB\historydb\thaweddb
tstatsHomePath = volume:_splunk_summaries\historydb\datamodel_summary
maxDataSize = 10
frozenTimePeriodInSecs = 9604800

[summary]
homePath   = $SPLUNK_DB\summarydb\db
coldPath   = $SPLUNK_DB\summarydb\colddb
thawedPath = $SPLUNK_DB\summarydb\thaweddb
tstatsHomePath = volume:_splunk_summaries\summarydb\datamodel_summary

[_internal]
homePath   = $SPLUNK_DB\_internaldb\db
coldPath   = $SPLUNK_DB\_internaldb\colddb
thawedPath = $SPLUNK_DB\_internaldb\thaweddb
tstatsHomePath = volume:_splunk_summaries\_internaldb\datamodel_summary
maxDataSize = 1000
maxHotSpanSecs = 432000
frozenTimePeriodInSecs = 2592000

[_audit]
homePath   = $SPLUNK_DB\audit\db
coldPath   = $SPLUNK_DB\audit\colddb
thawedPath = $SPLUNK_DB\audit\thaweddb
tstatsHomePath = volume:_splunk_summaries\audit\datamodel_summary

[_thefishbucket]
homePath   = $SPLUNK_DB\fishbucket\db
coldPath   = $SPLUNK_DB\fishbucket\colddb
thawedPath = $SPLUNK_DB\fishbucket\thaweddb
tstatsHomePath = volume:_splunk_summaries\fishbucket\datamodel_summary
maxDataSize = 500
frozenTimePeriodInSecs = 2419200

# this index has been removed in the 4.1 series, but this stanza must be
# preserved to avoid displaying errors for users that have tweaked the index's
# size/etc parameters in local/indexes.conf.
#
[splunklogger]
homePath   = $SPLUNK_DB\splunklogger\db
coldPath   = $SPLUNK_DB\splunklogger\colddb
thawedPath = $SPLUNK_DB\splunklogger\thaweddb
disabled = true

[_introspection]
homePath   = $SPLUNK_DB\_introspection\db
coldPath   = $SPLUNK_DB\_introspection\colddb
thawedPath = $SPLUNK_DB\_introspection\thaweddb
maxDataSize = 1024
frozenTimePeriodInSecs = 1209600
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Wed, 16 Aug 2017 09:29:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290950#M55493</guid>
      <dc:creator>eladelad</dc:creator>
      <dc:date>2017-08-16T09:29:18Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290951#M55494</link>
      <description>&lt;P&gt;The config file you've shared contains the default settings.  Please also share your changes; they should be in $SPLUNK_HOME\etc\system\local\indexes.conf.  Did you restart Splunk after making the config changes?&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:05:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290951#M55494</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2017-08-16T13:05:30Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290952#M55495</link>
      <description>&lt;P&gt;Each of your indexes is being allotted 250GB from the global parameter &lt;CODE&gt;maxTotalDataSizeMB = 250000&lt;/CODE&gt;. &lt;/P&gt;

&lt;P&gt;I believe what you are looking for is to keep 220GB total as a sum across all of your indexes, right?&lt;/P&gt;

&lt;P&gt;Check out using volume configs to ensure that Splunk uses a 220GB total volume across all your indexes and rolls off the oldest data when disk usage reaches that limit. And make sure you put the changes in a custom app, i.e. &lt;CODE&gt;$SPLUNK_HOME/etc/apps/&amp;lt;myIndexConfigs&amp;gt;/local&lt;/CODE&gt;, or in &lt;CODE&gt;$SPLUNK_HOME/etc/system/local/indexes.conf&lt;/CODE&gt;. Never edit the default files; your changes will be lost after an upgrade. &lt;/P&gt;
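
&lt;P&gt;A minimal sketch of what that could look like for a 220 GB cap (the volume name &lt;CODE&gt;primary&lt;/CODE&gt; is a placeholder; the index paths follow the defaults in your paste):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# $SPLUNK_HOME\etc\system\local\indexes.conf
# One volume capped at 220 GB: when the total size of all buckets in the
# volume exceeds the cap, Splunk freezes (by default, deletes) the oldest.
[volume:primary]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 220000

# Point each index's hot/warm and cold paths at the volume.
# thawedPath cannot reference a volume, so it keeps a plain path.
[main]
homePath   = volume:primary\defaultdb\db
coldPath   = volume:primary\defaultdb\colddb
thawedPath = $SPLUNK_DB\defaultdb\thaweddb
&lt;/CODE&gt;&lt;/PRE&gt;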

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.6.2/Admin/Configurationfiledirectories"&gt;http://docs.splunk.com/Documentation/Splunk/6.6.2/Admin/Configurationfiledirectories&lt;/A&gt;&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Important: Never change or copy the configuration files in the default directory. Default files must remain intact and in their original location. The Splunk Enterprise upgrade process overwrites the default directory, so any changes that you make in the default directory are lost on upgrade. Changes that you make in non-default configuration directories, such as $SPLUNK_HOME/etc/system/local or $SPLUNK_HOME/etc/apps/&amp;lt;app_name&amp;gt;/local, persist through upgrades.
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Configureindexstoragesize#Configure_index_size_with_volumes"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Configureindexstoragesize#Configure_index_size_with_volumes&lt;/A&gt;&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# global settings

# Inheritable by all indexes: no hot/warm bucket can exceed 1 TB.
# Individual indexes can override this setting.
homePath.maxDataSizeMB = 1000000

# volumes

[volume:caliente]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:frio]
path = /mnt/big_disk
maxVolumeDataSizeMB = 1000000

# indexes

[i1]
homePath = volume:caliente/i1
# homePath.maxDataSizeMB is inherited from the global setting
coldPath = volume:frio/i1
# coldPath.maxDataSizeMB not specified anywhere: 
# This results in no size limit - old-style behavior

[i2]
homePath = volume:caliente/i2
homePath.maxDataSizeMB = 1000  
# overrides the global default
coldPath = volume:frio/i2
coldPath.maxDataSizeMB = 10000  
# limits the size of cold buckets

[i3]
homePath = /old/style/path
homePath.maxDataSizeMB = 1000
coldPath = volume:frio/i3
coldPath.maxDataSizeMB = 10000
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:06:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290952#M55495</guid>
      <dc:creator>mattymo</dc:creator>
      <dc:date>2017-08-16T13:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290953#M55496</link>
      <description>&lt;P&gt;Do not change the indexes.conf in the default directory.&lt;BR /&gt;
Change the indexes.conf in the local directory.&lt;BR /&gt;
You can create a volume for all your indexes and set a limit on it:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:myVolume]
path = /path/to/data/
maxVolumeDataSizeMB = 220000
#220 GB
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Now add the volume to your indexes' paths.&lt;BR /&gt;
For example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[_introspection]
homePath = volume:myVolume\_introspection\db
coldPath = volume:myVolume\_introspection\colddb
thawedPath = $SPLUNK_DB\_introspection\thaweddb
maxDataSize = 1024
frozenTimePeriodInSecs = 1209600
&lt;/CODE&gt;&lt;/PRE&gt;
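
&lt;P&gt;After editing, you can check which file each setting actually comes from, then restart so the changes take effect (btool ships with Splunk):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# Show the merged indexes.conf, annotated with the source file of each setting
$SPLUNK_HOME\bin\splunk btool indexes list --debug
# Restart Splunk to apply the changes
$SPLUNK_HOME\bin\splunk restart
&lt;/CODE&gt;&lt;/PRE&gt;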

&lt;P&gt;Hope it helps!&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:10:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290953#M55496</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2017-08-16T13:10:50Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290954#M55497</link>
      <description>&lt;P&gt;There's no such file at&lt;BR /&gt;
$SPLUNK_HOME\etc\system\local\indexes.conf&lt;/P&gt;

&lt;P&gt;indexes.conf exists only in the default folder.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:15:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290954#M55497</guid>
      <dc:creator>eladelad</dc:creator>
      <dc:date>2017-08-16T13:15:34Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290955#M55498</link>
      <description>&lt;P&gt;Create one.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:24:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290955#M55498</guid>
      <dc:creator>adonio</dc:creator>
      <dc:date>2017-08-16T13:24:48Z</dc:date>
    </item>
    <item>
      <title>Re: How can I clear old logs when they hit 220 GB?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290956#M55499</link>
      <description>&lt;P&gt;If local\indexes.conf doesn't exist, you must create it and make your changes there.&lt;BR /&gt;
Don't just copy the entire default\indexes.conf file to local.  Instead, copy only the lines you want to change along with their associated stanza names (lines in [brackets]).&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 13:27:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-can-I-clear-old-logs-when-they-hit-220-GB/m-p/290956#M55499</guid>
      <dc:creator>richgalloway</dc:creator>
      <dc:date>2017-08-16T13:27:25Z</dc:date>
    </item>
  </channel>
</rss>