Hi all,
I just had a thought that I haven't really looked into much yet (work commitments), so I thought I would ask the (Splunk) world. Apologies if this is something already documented...
Are there any controls to prevent data being falsely inserted into an index through replication of buckets?
To expand on this a little: if someone wanted to skew results, they could potentially copy buckets from their own indexes to a production instance and rebuild the production indexes. Obviously, access controls would be the big factor in protecting against this, along with auditing of the indexes, but are there any controls in place from Splunk's side?
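For illustration, the kind of operation I have in mind is roughly the documented bucket-restore procedure, something like the following (bucket name and paths are hypothetical, and I haven't actually tried this against a hardened instance):

    # copy a bucket built elsewhere into the production index's thawed path
    cp -r /tmp/db_1389687998_1389687798_0 $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
    # rebuild the bucket's index files and metadata from its rawdata
    $SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1389687998_1389687798_0
    # restart so the rebuilt bucket is picked up and becomes searchable
    $SPLUNK_HOME/bin/splunk restart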
Thanks,
Matt
Did you read the docs?
http://docs.splunk.com/Documentation/Splunk/latest/Security/
http://wiki.splunk.com/Community:DeployHardenedSplunk
and this:
http://docs.splunk.com/Documentation/ES/2.4/Install/Configuredataprotection
Or did they not answer your question?
Agreed, thanks for the valued input! 🙂
(in these times, with the right resources, time and mindset - everything is possible)
My guess is that it comes down to how the certificates are handled and how the actual signing/hashing of the events written to disk is done (i.e. what combination of, for example, host / MAC address / password / certificate is used to generate the signature). Hopefully some developer could share some insight... 🙂
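One concrete mechanism I'm aware of (though only in newer releases, 6.3 and later if I remember right, so it may not apply to your version) is the per-bucket data integrity control in indexes.conf. A minimal sketch, assuming the default main index:

    # indexes.conf - hash rawdata slices as each bucket is written (Splunk 6.3+)
    [main]
    enableDataIntegrityControl = true

and verification afterwards from the CLI:

    # verify every bucket in the index against its stored hashes
    $SPLUNK_HOME/bin/splunk check-integrity -index main
    # or verify a single bucket (path hypothetical)
    $SPLUNK_HOME/bin/splunk check-integrity -bucketPath $SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1389687998_1389687798_0

Note that the hash files live inside the bucket itself, so this catches tampering with an existing bucket better than it does a wholesale copy of a self-consistent foreign bucket.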
Thanks, it kind of answers the question; however, I'm not sure how well this holds up if someone were able to replicate the same settings on their own Splunk instance before copying the index buckets (if they have compromised the server, chances are they can read the conf files). Maybe it's just something I need to test out, but I was wondering if there was a conclusive answer.
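The test I have in mind would be something like this (all names hypothetical), using the data integrity control sketched above on the target index:

    # 1. on a scratch instance with the same conf files, index some fabricated events
    # 2. copy the resulting bucket over, carrying its own hash files with it
    cp -r /tmp/db_1389687998_1389687798_0 $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
    # 3. see whether the foreign bucket is flagged or verifies cleanly
    $SPLUNK_HOME/bin/splunk check-integrity -bucketPath $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1389687998_1389687798_0

My suspicion is that a self-consistent copied bucket would verify cleanly, which is exactly the gap I'm asking about.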