I'm using the map command to run a search on each row of a table I've created. The table holds some IDs that link certain things together in the system/process I am trying to analyze.
The table holds 4 columns: ID1, Time1, ID2, Time2
When I pass these values to the map command as $ID1$ etc., they all work fine except for ID2, which is a large number prefixed with "RT", e.g. RT201804171037017795. This kept showing up as "null" in the resulting events and hence led to problems.
I figured this "RT" prefix might cause the value to be recognized as a real-time search token, so I stripped it with the trim() function: trim(ID2, "RT") (trim removes the listed characters from both ends, which here only affects the leading "RT"). So far so good.
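For reference, the stripping step in isolation looks like this (a minimal sketch; makeresults only fabricates a sample row, and the field name matches my table):

```
| makeresults
| eval ID2="RT201804171037017795"
| eval ID2=trim(ID2, "RT")
| table ID2
```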
However, now when I pass the number to the map command as $ID2$ and use its value as a table field, e.g. ...| eval Identifier2=$ID2$ | table Identifier2 | ..., the resulting field is not 201804171037017795 but 201804171037017800. Because the number is so large, I thought it might be recognized as an epoch time. This is probably the case, as both 2018041710370177 and 2018041710370178 resolve to the same epoch time (interpreted as microseconds, according to epochconverter): Monday 12 December 2033 23:08:30.370.
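That said, the same rounding can be reproduced without any timestamp parsing, which may point to ordinary floating-point precision loss instead: as far as I understand, eval treats an unquoted literal as a 64-bit double, and 201804171037017795 is well above 2^53, the limit up to which doubles represent every integer exactly. A small repro sketch (my assumption, not verified against the docs):

```
| makeresults
| eval as_number = 201804171037017795
| eval as_string = "201804171037017795"
| table as_number as_string
```

If the precision explanation is right, as_number displays rounded while as_string keeps all digits.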
Hence, the reason the number is rounded up is likely because Splunk thinks I'm giving it an Epoch time while it is simply a large identifier. Thus, my question (finally) is: How do I stop Splunk from recognizing this large number as a timestamp? I want to explicitly tell Splunk it is just a number or even a string and the value does not matter. It should be parsed as a string only for identification.
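One candidate fix, sketched here as an assumption: quote the token inside the map search so that eval receives a string literal instead of a bare number (the escaped quotes are needed because the token sits inside map's search string):

```
... | map search="search ... | eval Identifier2=\"$ID2$\" | table Identifier2" | ...
```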
I've already tried tostring(ID2), to no avail.
TL;DR: How do I explicitly tell Splunk to treat a (large) value as a string/number and not as an epoch time?