scaling epoch timestamps results in strange values

pdovy
New Member

I've got a sourcetype which captures data for two nearly identical applications, the difference being that one calculates timestamps as microsecond epochs and the other as nanosecond epochs. I am using queries to do some latency analysis, so I'd like to scale up the microsecond epochs so that the results are all in the same units.

I'm trying to do the following in my query:

| eval SCALED_REQUEST_TIME = if(REQUEST_TIME > 10000000000000000, REQUEST_TIME, REQUEST_TIME * 1000)

However, I get some pretty strange results: for the microsecond timestamps that get scaled by this line, the last three (newly added) digits look arbitrary. For example:

1321545903871484

becomes

1321545903871483904

I've tried using convert with num() to convert it beforehand, and using asnumber() in the eval, but I get the same result regardless.
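
A minimal standalone repro of what I'm seeing (just a sketch; it assumes a test search with makeresults behaves the same as eval does in my real query):

| makeresults
| eval REQUEST_TIME = 1321545903871484
| eval SCALED_REQUEST_TIME = REQUEST_TIME * 1000
| table REQUEST_TIME, SCALED_REQUEST_TIME

SCALED_REQUEST_TIME comes back as 1321545903871483904 instead of 1321545903871484000.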


gkanapathy
Splunk Employee

Ah, I wonder if you're having trouble because this arithmetic is being done in floating point on the processor, which makes it subject to rounding problems. (Doubles can represent integers exactly only up to 2^53, roughly 9×10^15, so 16 decimal digits is about the limit for double-precision FP numbers.) For something like this you would really want arbitrary-precision integers, or 64-bit (or wider) integers. I don't know if you can coerce this in Splunk, however, so I don't know that I have a solution for you.
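
That said, one way you might be able to sidestep the floating-point multiplication entirely is to append the three zeros as text rather than multiplying, using eval's string concatenation operator. This is only a sketch and assumes REQUEST_TIME comes through as a plain string of digits:

| eval SCALED_REQUEST_TIME = if(len(REQUEST_TIME) > 16, REQUEST_TIME, REQUEST_TIME . "000")

The len(REQUEST_TIME) > 16 test mirrors the numeric comparison against 10000000000000000 in your original search. Keep in mind the result is still a string, so any later arithmetic on it (for example subtracting two of these values to compute latency) would run into the same double-precision limit.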
