Splunk Observability Cloud

SignalFlow timestamp does not match metric finder export timestamp

JohnGregg
Path Finder

I used the metric finder to graph jvm.gc.duration_count, then exported the results to CSV.  I also have a SignalFlow API call to grab the same data.

The counts match, but the timestamps are offset by 5 minutes. In other words, my SignalFlow output shows 303 GCs at 15:11, but the metric finder export shows the same 303 GCs at 15:16. Subsequent periods are offset in the same way.

My code is using ChannelMessage.DataMessage.getLogicalTimestampMs().

Postman output looks like this:

data: {
data: "data" : [ {
data: "tsId" : "AAAAAMcvg8Q",
data: "value" : 1.0
data: }, {
data: "tsId" : "AAAAAKgFlvo",
data: "value" : 303.0
data: } ],
data: "logicalTimestampMs" : 1750709460000,
data: "maxDelayMs" : 12000
data: }

What's going on?

thanks

bishida
Splunk Employee

I think what is happening is that one timestamp reflects the end of the roll-up window and the other reflects the beginning. If you need them to align, you may need to subtract the roll-up window from one set of timestamps. To verify this theory, try different roll-up periods and see whether the difference is always equal to the roll-up.
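
To make the arithmetic concrete, here is a minimal sketch; the class name is mine, and the 5-minute value is just the offset you reported, so substitute whatever the actual roll-up turns out to be:

public class RollupOffset {
    public static void main(String[] args) {
        // Assumed roll-up window (5 minutes); replace with the real roll-up in ms.
        long rollupMs = 5 * 60 * 1000L;

        // logicalTimestampMs from the SignalFlow stream (the Postman output above).
        long apiTimestampMs = 1750709460000L;

        // If the CSV export stamps the same window at its end while the API
        // stamps it at its start, shifting one side by the roll-up lines them up.
        long csvTimestampMs = apiTimestampMs + rollupMs;
        boolean sameWindow = (csvTimestampMs - rollupMs) == apiTimestampMs;
        System.out.println(sameWindow); // true when the offset equals the roll-up
    }
}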

JohnGregg
Path Finder

Thanks, I think you're on to something.

In the UI, I set the resolution to low, which was 2 minutes for a 1-hour window. I changed my query to set the start and end times to the same hour range and set the resolution to 2 minutes (I was using null before).
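
Roughly what the request looks like now; this is a sketch against the SignalFlow REST execute endpoint rather than my exact code, and the realm, token, program, and epoch times are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SignalFlowExecute {
    public static void main(String[] args) throws Exception {
        String program = "data('jvm.gc.duration_count').publish()";
        long startMs = 1750708800000L;        // start of the 1-hour window (placeholder)
        long stopMs = startMs + 3_600_000L;   // end of the window
        long resolutionMs = 120_000L;         // 2 minutes, matching the UI's "low" resolution

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://stream.REALM.signalfx.com/v2/signalflow/execute"
                        + "?start=" + startMs + "&stop=" + stopMs + "&resolution=" + resolutionMs))
                .header("X-SF-Token", "YOUR_ACCESS_TOKEN")
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString(program))
                .build();

        // The response is a server-sent-event stream like the Postman output above;
        // for a bounded query (start/stop in the past) it completes and the stream closes.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}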

Now I get matching results.

But before I set the resolution, which result was "correct"? What I'm actually doing is recording metrics like GC counts, CPU usage, etc., during a performance test in LoadRunner so I can use LR's Analysis tool to combine the data LR produces with the data APM produces. I plan to run the query once per minute as the test runs and grab the most recent minute of data. Will I actually get the most recent minute? I'm assuming that since the UI appeared to lag, the query results are closer to being correct.

thanks

bishida
Splunk Employee

OK, cool, thanks for the additional info. That gives me more confidence that what you were originally seeing is a reflection of data roll-ups. That being said, I think the “more recent” timestamp is “correct” because the value at that timestamp represents a roll-up of the past X amount of time.

For your use case of reliably grabbing the “past minute”, I wonder if it would be a good idea to make that minute well-defined by specifying start_time and end_time instead of just “-1m”, so you avoid edge cases where a datapoint might arrive late for some reason outside your control (network latency, java agent metric export, etc.). So maybe the minute you query is something like from (now - 2 min) to (now - 1 min).
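
Something along these lines for the boundary arithmetic (just a sketch; plug the two values into whatever execute call you’re already making):

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class MinuteWindow {
    public static void main(String[] args) {
        // Truncate "now" to the minute so the window boundaries are stable across runs,
        // then query the minute that ended one minute ago.
        Instant currentMinute = Instant.now().truncatedTo(ChronoUnit.MINUTES);
        long startMs = currentMinute.minus(2, ChronoUnit.MINUTES).toEpochMilli(); // now - 2 min
        long stopMs = currentMinute.minus(1, ChronoUnit.MINUTES).toEpochMilli();  // now - 1 min

        System.out.println("start=" + startMs + " stop=" + stopMs);
    }
}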

Once the data arrives at the Observability Cloud ingest endpoint, I don’t think you have to worry about any delay with ingest. The data will be recorded as it streams in, even when something like a chart visualization appears to have a delay in drawing. I’d be more concerned about any potential latency from the point in time that the metric is recorded (e.g. garbage collection in the java agent) to the time it takes for the agent to export that datapoint and for that datapoint to traverse the network to the ingest endpoint. The timestamp on the datapoint will reflect the time it was recorded even if it takes extra time for that datapoint to arrive at ingest (e.g., the datapoint is recorded by the java agent at 16:04:01 but arrives at the ingest endpoint at 16:04:45 due to some temporary network condition).
