Hi friends,
I am using the search query below to see the usage of a specific index. When I run the search over 30 days, it shows usage for a few days and then nothing for the rest. But when I search the index directly (query: index=ship), I can see data for all days.
For the other indexes I can see complete usage for all 30 days. Can you help me debug this issue?
QUERY:
index=_internal source=*metrics.log | eval GB=kb/(1024*1024) | search group="per_index_thruput" series=ship
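(As an aside, a minimal equivalent sketch with the filters moved into the base search, which is usually faster, and with a timechart added so the daily gaps are visible; the group, series, and kb fields come from metrics.log as in the query above:)

index=_internal source=*metrics.log group="per_index_thruput" series=ship
| eval GB=kb/(1024*1024)
| timechart span=1d sum(GB) AS GB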
When you say "I can see data for all days", are you talking about the _time
, or the _indextime
?
If the events are arriving in chunks, for instance from a once-a-week batch job, they will be indexed in chunks, even though they are spread out across the calendar.
Take a look at the indexing of the data using this...
your search that finds the actual records
| eval _time = _indextime
| timechart span=1h count
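For example, a minimal concrete sketch, assuming the index in question is ship:

index=ship
| eval _time = _indextime
| timechart span=1d count

If the counts pile up on just a handful of days, the events are being indexed in batches, and the per-interval thruput metrics will naturally be empty on the other days.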
To be more specific, when I run the command below, I can see usage numbers for all indexes in my cluster; only for the ship index do I not see numbers.
index=_internal source=*metrics.log | eval GB=kb/(1024*1024) | search group="per_index_thruput" | timechart span=1d sum(GB) by series limit=20
When I run this query for the last 30 days, I can see data for 5 or 6 days; the rest shows as empty (no numbers).
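One caveat worth knowing: the per_index_thruput group in metrics.log only reports the busiest series in each sampling interval, so a quiet index can drop out of it entirely. As a cross-check, a sketch against license_usage.log (run on the license manager; idx and b are the per-slice index and byte-count fields in that log):

index=_internal source=*license_usage.log type=Usage idx=ship
| eval GB=b/(1024*1024*1024)
| timechart span=1d sum(GB) AS GB

If this shows volume on the days that come up empty in the metrics-based search, the data is there and only the thruput sampling is hiding it.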
What's the retention period on the _internal index and the ship index?
For _internal we haven't set any retention period, and for ship the config below is set (frozenTimePeriodInSecs=7776000 works out to 90 days, so retention alone shouldn't hide a 30-day window).
[ship]
homePath = volume:primary/ship/db
coldPath = volume:primary/ship/colddb
thawedPath = $SPLUNK_DB/ship/thaweddb
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
maxHotBuckets = 6
Just FYI: for all other indexes we can see data for all 30 days; only for this index do we see an issue pulling usage.
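As a side check, btool (a sketch, run on the indexer) can confirm that the stanza above is what's actually in effect, including any defaults layered on top:

$SPLUNK_HOME/bin/splunk btool indexes list ship --debug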