We are running a cluster environment and splunkd is frequently getting killed with OOM errors on all of the indexers. Can you please suggest how I can rectify this?
FYI, Splunk is running on Linux machines. Below is my memory configuration (free -g):
                    total   used   free  shared  buffers  cached
Mem:                   11     10      0       0        0       9
-/+ buffers/cache:             0     10
Swap:                   3      0      3
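Before anything else, it is worth confirming that the kernel OOM killer is what is terminating splunkd, and which process is actually growing. The following standard Linux diagnostics (dmesg/journalctl/ps; availability varies by distro) are a sketch of how to check:

```shell
# Check the kernel log for OOM-killer events; the kernel records
# which process was killed and its memory usage at the time:
dmesg -T 2>/dev/null | grep -iE 'out of memory|oom-killer' | tail -n 5

# On systemd hosts the same records are in the kernel journal:
journalctl -k --no-pager 2>/dev/null | grep -i 'oom' | tail -n 5

# Watch splunkd resident memory (RSS, in KB), largest first,
# to see which splunkd process is growing between kills:
ps -C splunkd -o pid,rss,args --sort=-rss 2>/dev/null | head -n 5
```

If the OOM messages name splunkd search processes rather than the main daemon, that points at search load rather than indexing load.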
Unfortunately, you genuinely need a memory increase.
Please see the minimum spec details for indexers: https://docs.splunk.com/Documentation/Splunk/7.2.6/Capacity/Referencehardware
It all depends on how much data you are ingesting, how many TAs you have, etc.
A mid-range spec, which I suspect you need, looks like this:
Intel 64-bit chip architecture
24 CPU cores at 2GHz or greater speed per core
64GB RAM
Disk subsystem capable of a minimum of 800 average IOPS
A 1Gb Ethernet NIC, with optional second NIC for a management network
A 64-bit Linux or Windows distribution
If you want a short-term quick fix, the plan is:
1. Reduce the number of searches/concurrent searches as much as possible; disable as many searches as you can within your applications.
2. Remove all apps that contain a savedsearches.conf, see if the problem stops, and then reintroduce each app/TA/SA one by one.
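Step 1 can also be enforced centrally through limits.conf rather than disabling searches one by one. The setting names below are real Splunk limits.conf knobs, but the values are purely illustrative and should be tuned for your environment:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf (sketch; values illustrative)

[search]
# Baseline number of concurrent searches allowed (default 6):
base_max_searches = 4
# Additional concurrent searches allowed per CPU core (default 1):
max_searches_per_cpu = 1

[scheduler]
# Percentage of the total search concurrency the scheduler may
# consume for saved/scheduled searches (default 50):
max_searches_perc = 25
```

Lowering these caps trades search throughput for memory headroom, which is usually the right trade while the boxes are undersized.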