Hello,
For solid reasons that I can't go into here, we have a topology of...
AWS CloudWatch -> Kinesis Firehose -> AWS Delivery Stream Object -> AWS Lambda -> HEC listener on a Heavy Forwarder -> that Heavy Forwarder -> another Heavy Forwarder -> Splunk Cloud. I'm pretty sure that, apart from having one HF forward to a second before hitting Splunk Cloud, this is the reference architecture for CloudWatch events.
There is no Splunk indexing going on in our infrastructure. We are just forwarding loads of information to Splunk Cloud for indexing and analysis there.
We can establish latency through most of that chain, but we are interested in determining the latency from when our events land in Splunk Cloud to when those events are visible for analysis. Is there a handy metric or query we can re-use?
Thanks in advance...
Booo! But thank you for the answer, it will save me looking for a thing that doesn't exist!
Hi @ChaoticMike,
if you can, please vote for this idea at https://ideas.splunk.com/ideas/EID-I-1731
Ciao.
Giuseppe
Hi @ChaoticMike,
in Splunk every event has two timestamps: _time (the timestamp of the event itself) and _indextime (the time the event was indexed), so you could calculate the difference between these two fields:
index=*
| eval diff=_indextime-_time
| stats
avg(diff) AS diff_avg
max(diff) AS diff_max
min(diff) AS diff_min
BY index
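For example, a sketch that narrows this to the last hour and adds a 95th percentile to spot outliers (the time window, the sourcetype split, and the percentile column are just illustrative choices, not required):
index=* earliest=-60m
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) AS avg_lag perc95(lag_seconds) AS p95_lag max(lag_seconds) AS max_lag BY index sourcetype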
Ciao.
Giuseppe
Thanks Giuseppe. Our problem is that we aren't sure whether our latency is in the forwarding chain or within Splunk Cloud. We can indeed determine the end-to-end latency, but we are trying to drill into each hop. Does anyone know of a way to do that? It sounds... 'tricky'!
Hi @ChaoticMike,
there is no per-hop tracking (I asked about this on Splunk Ideas), so you can only calculate the overall latency.
Ciao.
Giuseppe