Question 1:
In my org we have Splunk ES 7.2.X with 4 VMs (Windows OS): 1 Search Head, 1 Deployment Server, and 2 Indexers.
Search Head:
On the search head we installed and configured the Splunk Add-on for Amazon Web Services, and we are getting logs into Splunk. Those logs are being saved in the main index on the search head under defaultdb/db, and I did not set a bucket retention policy. Can you please help me with the exact indexes.conf settings to set a retention policy that deletes logs older than 1 year?
Question 2:
I integrated some server logs (Hadoop, MuleSoft, ForgeRock) into Splunk, and these are being indexed in the main index. When I went looking for the indexes.conf file, I was shocked that there was no indexes.conf file anywhere. After checking on my own I found _cluster/indexes.conf, and in it I saw a stanza like [main] with repFactor = 0.
From this I guessed that it is a clustered indexer, which is why it has repFactor = 0.
So can you please help me with the exact indexes.conf settings to set a retention policy that deletes logs older than 1 year on a clustered indexer?
The short answer to question 1 is frozenTimePeriodInSecs = 31536000.
The short answer to question 2 is that the retention policy can be set in any indexes.conf file, but it belongs best in the file that defines the index itself. Use splunk btool --debug indexes list to find the pertinent file.
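For illustration, here is a minimal indexes.conf sketch, assuming your AWS data really is landing in the default main index (swap in whatever index name btool reports for your data):

[main]
# buckets whose newest event is older than 1 year roll to frozen, which deletes them
# unless a coldToFrozenDir or coldToFrozenScript is configured
frozenTimePeriodInSecs = 31536000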
The longer answer addresses some misconceptions.
First, in a distributed environment (separate search head and indexer(s)), all instances except indexers should forward all output to the indexers. This makes sense when you think of the indexers as the storage layer - any data to be stored should be in the storage layer.
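As an example, here is a minimal outputs.conf sketch for the search head, assuming your indexers listen on the default receiving port 9997 (the host names below are placeholders):

[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997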
Second, search heads should not be running inputs. They should manage searches. Inputs should come from universal or heavy forwarders.
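As a sketch of the forwarder side, a hypothetical inputs.conf monitor stanza on a universal forwarder (the path, index, and sourcetype are placeholders for whatever your Hadoop/MuleSoft/ForgeRock servers actually write):

[monitor://D:\logs\hadoop]
index = main
sourcetype = hadoop:logs
disabled = false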
Third, there must be an indexes.conf file somewhere. Splunk likely would not start without one. Use the CLI command splunk btool --debug indexes list to see all of your indexes.conf settings and the files from which they came.
Fourth, the presence of _cluster/indexes.conf does not mean your indexers are clustered. That you did not mention a Cluster Master instance tells me you're most likely not clustered. To verify, run splunk btool server list clustering and look for the value of the mode setting. If it's set to "disabled" then you are not clustered.
I checked with the given command; here is the output:
D:\Splunk\bin>splunk btool server list clustering
[clustering]
access_logging_for_heartbeats = false
allow_default_empty_p4symmkey = true
allowed_hbmiss_count = 3
auto_rebalance_primaries = true
available_sites =
backup_and_restore_primaries_in_maintenance = false
buckets_per_addpeer = 1000
buckets_to_summarize = primaries
commit_retry_time = 300
constrain_singlesite_buckets = true
cxn_timeout = 60
decommission_force_finish_idle_time = 0
decommission_node_force_timeout = 300
decommission_search_jobs_wait_secs = 180
enableS2SHeartbeat = true
executor_workers = 10
generation_poll_interval = 5
heartbeat_period = 1
heartbeat_timeout = 60
idle_connections_pool_size = -1
local_executor_workers = 10
maintenance_mode = false
manual_detention = off
master_uri = https://10.128.162.121:8089
max_auto_service_interval = 30
max_fixup_time_ms = 0
max_nonhot_rep_kBps = 0
max_peer_build_load = 2
max_peer_rep_load = 5
max_peer_sum_rep_load = 5
max_peers_to_download_bundle = 0
max_primary_backups_per_service = 10
max_replication_errors = 3
mode = slave
multisite = false
notify_scan_min_period = 10
notify_scan_period = 10
pass4SymmKey =
percent_peers_to_restart = 10
quiet_period = 60
rcv_timeout = 60
re_add_on_bucket_request_error = false
rebalance_threshold = 0.90
register_forwarder_address =
register_replication_address =
register_search_address =
rep_cxn_timeout = 60
rep_max_rcv_timeout = 180
rep_max_send_timeout = 180
rep_rcv_timeout = 60
rep_send_timeout = 60
replication_factor = 3
reporting_delay_period = 30
restart_timeout = 60
rolling_restart = restart
s2sHeartbeatTimeout = 600
search_factor = 2
search_files_retry_timeout = 600
searchable_target_sync_timeout = 60
searchable_targets = true
send_timeout = 60
service_interval = 0
service_jobs_msec = 100
site_mappings =
site_replication_factor = origin:2, total:3
site_search_factor = origin:1, total:2
summary_registration_batch_size = 1000
summary_replication = false
summary_update_batch_size = 10
summary_wait_time = 660
target_wait_time = 150
throwOnBucketBuildReadError = false
use_batch_mask_changes = true
The mode is showing slave.
Now what do I have to do? My intention is to delete logs older than one year and free up space on my D: drive. Please tell me what to do and how to do it.
So you have an indexer cluster. I'll assume your deployment server is also functioning as a Cluster Master.
One key setting for managing how much disk space is used is frozenTimePeriodInSecs. Set it to a value no larger than 31536000 to limit data to 1 year or less. Do this in $SPLUNK_HOME\etc\master-apps\_cluster\local\indexes.conf on the Cluster Master, then run splunk apply cluster-bundle to send the change to the indexers.
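A sketch under those assumptions (D:\Splunk as $SPLUNK_HOME on the Cluster Master, and main as the index holding the data):

D:\Splunk\etc\master-apps\_cluster\local\indexes.conf on the Cluster Master:

[main]
# freeze buckets after 1 year; frozen buckets are deleted by default
# unless a coldToFrozenDir or coldToFrozenScript is configured
frozenTimePeriodInSecs = 31536000

Then push the bundle from the Cluster Master:

D:\Splunk\bin>splunk apply cluster-bundle --answer-yes

Once the peers pick up the new setting, buckets whose newest event is older than a year are rolled to frozen and removed, which frees space on the D: drive.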