Monitoring Splunk

Different Number of Events per UF with different searches

sscholz
Explorer

Hello guys,

I am creating a dashboard that shows some statistics about the UFs in our environment.

While looking for a good way to get the number of events delivered per index, I noticed something I can't explain at the moment. Hopefully you can shed some light on it. 😉

Here is what I am comparing:

# The number of events indexed on the indexer for this forwarder

| tstats count as eventcount where index=* OR index=_* host=APP01 earliest=-60m@m latest=now by index, sourcetype | stats sum(eventcount) as eventcount by index

index        eventcount
_internal    11608
win          1337
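
(For the dashboard itself, a per-forwarder variant of the same idea might look like the sketch below; this assumes each UF reports under its own host value, which is not confirmed in the original post.)

| tstats count as eventcount where index=* OR index=_* earliest=-60m@m latest=now by host, index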

 

# The number of events forwarded by the forwarder

index=_internal component=Metrics host=APP01 series=* NOT series IN (main) group=per_index_thruput
| stats sum(ev) AS eventcount by series

series       eventcount
_internal    1243
win          2876

 

But the two searches deliver different values for the same time range (60 min).
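
For reference, the raw metrics events behind the second search can be inspected like this (just a sketch using the win series from above; ev and kb are the per-interval event and kilobyte counts reported in metrics.log):

index=_internal host=APP01 source=*metrics.log* component=Metrics group=per_index_thruput series=win earliest=-60m@m latest=now
| table _time, series, ev, kb
| sort _time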

Does anyone have an idea why this is happening?

Thanks.

BR, Tom


codebuilder
Influencer

In your first query you are looking at all events, across all internal and non-internal indexes.

In your second query you are looking only at _internal, and you have narrowed it further to the Metrics component and the per_index_thruput group.

That is why you are seeing different results. Essentially, you are not comparing apples to apples, so to speak.
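
As a rough sketch of a more like-for-like check (assuming the same host APP01, the same explicit 60-minute window in both searches, and the same single index/series name, here win), something along these lines could be run side by side:

| tstats count as eventcount where index=win host=APP01 earliest=-60m@m latest=now by index

index=_internal host=APP01 component=Metrics group=per_index_thruput series=win earliest=-60m@m latest=now
| stats sum(ev) AS eventcount by series

Keep in mind that the metrics data is sampled on a fixed interval, so small differences can remain even with matching filters.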

----
An upvote would be appreciated and Accept Solution if it helps!


sscholz
Explorer

Thank you for the clarification.

It seems that I had apples on my eyes. 😞

...

 

Greetings.
