Knowledge Management

[SmartStore] Splunk S2 cacheman REST endpoint seems to show incorrect bucket status?

rbal_splunk
Splunk Employee

The REST endpoint /services/admin/cacheman shows an incorrect cm:bucket.status for buckets.

In our cluster we have 80TB of local storage in total, and disk utilization is 100% on all 10 indexers. But when we queried /services/admin/cacheman it showed 394TB in local and 0.025TB in remote, while today, after a reboot, it shows 396TB in remote and 0.309TB in local.

Also note that we tried using the evict REST endpoint to manually trigger a bucket eviction. The bucket folder was emptied (contents deleted), but /services/admin/cacheman still showed "cm:bucket.status" as "local". So the eviction happens, but the status is incorrect.

It is my understanding that the cache manager knows about all buckets in the system at all times, whether a bucket is cached or not. It would be nice to get access to that information along with metadata (such as bucket size and/or individual file sizes), regardless of whether the bucket is local or remote.

dbinspect does expose most of this information, but it can only be queried efficiently for a small set of buckets at a time. We need a way to efficiently query it for all buckets in the system (millions of records) at once.


srajarat2
Path Finder

I am assuming you are including "dedup title" in the search with "| rest /services/admin/cacheman", as S3 keeps only a single copy of each bucket.

Meanwhile, cm:bucket.estimated_size is the actual bucket size, and there is no guarantee that local storage holds all of it, since buckets could have been evicted. Alternatively, summing the bucket size across all deduped buckets will give you the total S3 storage use.

| rest /services/admin/cacheman | dedup title | stats sum(cm:bucket.estimated_size) AS remoteDiskSize | eval remoteDiskSizeGB = round(remoteDiskSize / 1024 / 1024 / 1024, 2)
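The same dedup-and-sum aggregation can be sketched outside SPL; a minimal Python sketch, assuming the cacheman output has already been parsed into a list of dicts (the sample entries below are illustrative, not real bucket data):

```python
# Sketch: replicate "| dedup title | stats sum(cm:bucket.estimated_size)"
# over a parsed cacheman response.

def remote_size_gb(entries):
    """Sum cm:bucket.estimated_size over unique bucket titles, in GB."""
    seen = set()
    total_bytes = 0
    for entry in entries:
        title = entry["title"]
        if title in seen:          # dedup title: count each bucket once
            continue
        seen.add(title)
        total_bytes += int(entry["cm:bucket.estimated_size"])
    return round(total_bytes / 1024 ** 3, 2)

# Two replicas of the same bucket plus one other bucket (hypothetical IDs).
sample = [
    {"title": "bid|apache~19~GUID-A|", "cm:bucket.estimated_size": 2 * 1024 ** 3},
    {"title": "bid|apache~19~GUID-A|", "cm:bucket.estimated_size": 2 * 1024 ** 3},
    {"title": "bid|apache~20~GUID-B|", "cm:bucket.estimated_size": 1024 ** 3},
]
print(remote_size_gb(sample))  # the duplicate replica is counted once -> 3.0
```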

If you are still seeing cm:bucket.status as "local", there might be some files left behind in that bucket, such as bucket_info.csv, Hosts.data, or Sources.data.

Try evicting the bucket using the following command.

splunk _internal call /services/admin/cacheman/"bid|index-name~id~guid|"/evict -method POST -auth un:pwd

Replace index-name, id, and guid as follows.
For example, if the bucket you want to evict under an index named apache is db_1572849480_1572840108_19_9A1BE399-F73B-4F0B-83E4-43F7959E3710, invoke:

splunk _internal call /services/admin/cacheman/"bid|apache~19~9A1BE399-F73B-4F0B-83E4-43F7959E3710|"/evict -method POST
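If you invoke the evict endpoint over raw HTTPS rather than through splunk _internal call, the pipe characters in the bid|...| token must be percent-encoded in the URL path. A minimal Python sketch of building that path, using the example bucket above; the helper name and the management-port note are assumptions, not Splunk API:

```python
from urllib.parse import quote

def evict_path(index, bucket_id, guid):
    """Build the cacheman evict endpoint path for a raw REST call.
    The '|' characters in the bid|...| token must be percent-encoded;
    '~' is URL-safe and stays as-is."""
    bid = f"bid|{index}~{bucket_id}~{guid}|"
    return f"/services/admin/cacheman/{quote(bid, safe='')}/evict"

print(evict_path("apache", 19, "9A1BE399-F73B-4F0B-83E4-43F7959E3710"))
# The resulting path would then be POSTed to the indexer's management
# port (e.g. https://indexer:8089), which is an assumption here.
```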


seegeekrun
Path Finder

Just a thought, but wouldn't the cacheman only be aware of buckets that have rolled all the way to cold and been called back because of a search?


hsesterhenn_spl
Splunk Employee

Hi,

nope.

We copy buckets to SmartStore after they roll from hot to warm, and mark them for eviction on the secondary indexers.

The cacheman is aware of pretty much every bucket.

 

BR,

Holger
