Daily Scheduled Data Integrity Workaround

cemiam
Path Finder

Hi,

I am looking for a workaround for computing hashes for buckets. The documentation says the hash can be computed for a specific data volume, but I cannot find a way to do this on a daily schedule. Is there any way to do this?

https://docs.splunk.com/Documentation/Splunk/6.6.2/Security/Dataintegritycontrol

Best Regards,
Cem


adonio
Ultra Champion

It looks like if you configure your buckets to rotate every day (24 hours) with maxHotSpanSecs = 86400,
then you will get a new hash per bucket once it rolls to warm, in the l2Hash file, as described in the link you provided.
That setting lives in indexes.conf and is applied on a per-index basis. Be very careful when using it. Read more here: https://docs.splunk.com/Documentation/Splunk/6.6.2/Admin/Indexesconf
"When you enable data integrity control, Splunk Enterprise computes hashes on every slice of newly indexed raw data and writes it to a l1Hashes file. When the bucket rolls from hot to warm, Splunk Enterprise computes a hash on the contents of the l1Hashes and stores the computed hash in l2Hash. Both hash files are stored in the rawdata directory for that bucket."
If you track the l2Hash files, I assume you will have the daily hash.
Note: I have never tried this before; it is just a theory.
Hope it helps.
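
For illustration, here is a minimal sketch of what that indexes.conf stanza might look like. The index name my_index is only a placeholder, and enableDataIntegrityControl is the setting that turns on the hashing described in the quoted doc; test on a non-production index first.

[my_index]
# assumption: roll hot buckets to warm after at most 24 hours, per the suggestion above
maxHotSpanSecs = 86400
# turn on the slice hashing (l1Hashes / l2Hash) described in the data integrity doc
enableDataIntegrityControl = true

Note that buckets can also roll earlier for other reasons (for example size limits), so this gives at most a day of data per bucket rather than exactly one bucket per day.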


cemiam
Path Finder

Hi,

Thanks for the response. I think that workaround will resolve our issue. I will first test it in our environment and then apply it to production.

Thanks and Best Regards,
Cem


adonio
Ultra Champion

Very good, I will convert this to an answer then.
Please let the community know how it worked out for you.
If it sums things up, kindly accept the answer and upvote any comments that were helpful.

cheers

adonio
Ultra Champion

Hello there,
The documents imply that the hashes are enabled (and created) at the index level, i.e. per index (name), not per volume.
They also imply that the hashes are computed per slice (size) of data and not on a schedule.
What is the problem you are trying to solve?
Hope it helps a little.
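
As an aside, the data integrity doc linked above also describes a CLI check that operates per index or per bucket, which matches this per-index behaviour. A hedged example (my_index and the bucket path are placeholders):

splunk check-integrity -index my_index -verbose
splunk check-integrity -bucketPath <path to a single bucket> -verbose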


cemiam
Path Finder

Hi,

By volume I actually meant slice. We can do it per slice (size), but we need to do it per day. I just want to make sure whether there is any workaround to get a hash of the daily indexed data. We have a system integration for data integrity regulations and need to provide a hash of the indexed data per day.

Best Regards,
Cem
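
If the daily roll from the accepted answer is in place, one possible (untested) way to harvest the per-day hashes would be to read the l2Hash file of every bucket that rolled to warm in the last day. The sketch below assumes a default $SPLUNK_DB layout and an example index named my_index:

# print the path and contents of l2Hash files written in the last 24 hours
find "$SPLUNK_DB/my_index/db" -name l2Hash -mtime -1 -print -exec cat {} \;

The output could then be handed to the integration from a daily cron job or scheduled script.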
