<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Very large number math in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459133#M171234</link>
    <description>&lt;P&gt;Another solution is to derive a "shorter ID" and do the calculations based on that.&lt;/P&gt;

&lt;P&gt;Example&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;|makeresults
| eval sequence_number="6675670249450679850"
| rex field=sequence_number "(?&amp;lt;subSeq&amp;gt;\d{5})$"
| eval firstSeq=subSeq - 1
| table sequence_number,subSeq,firstSeq
&lt;/CODE&gt;&lt;/PRE&gt;</description>
    <pubDate>Tue, 14 May 2019 21:26:17 GMT</pubDate>
    <dc:creator>koshyk</dc:creator>
    <dc:date>2019-05-14T21:26:17Z</dc:date>
    <item>
      <title>Very large number math</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459132#M171233</link>
      <description>&lt;P&gt;I have a log file with a very large number in it; it's a sequence number, doesn't seem to have anything to do with time, and the values are all unique.  They look like:&lt;/P&gt;

&lt;P&gt;sequence_number&lt;BR /&gt;
6675670249450679850&lt;BR /&gt;
6675670249450679847&lt;BR /&gt;
6675670249450679801&lt;BR /&gt;
6675670249450679800&lt;BR /&gt;
6675670249450679653&lt;BR /&gt;
6675670249450679652&lt;BR /&gt;
6675670249450679645&lt;BR /&gt;
6675670249450679643&lt;BR /&gt;
6675670249450679642&lt;BR /&gt;
6675670249450679523&lt;BR /&gt;
6675670249450679522&lt;/P&gt;

&lt;P&gt;There's a relationship between logs when the numbers differ by 1, but the logs contain different information.  I'm trying to do a transaction to group these lines, but when I compute "sequence_number - 1", Splunk seems to round horribly (these 19-digit values are well beyond 2^53, the range in which double-precision arithmetic stays exact).  I only really need to compare the least significant digits, so I have a workaround that creates a field from the higher-numbered event with:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| eval subSeq=tonumber(substr(tostring(sequence_number), -6)), firstSeq=subSeq - 1
&lt;/CODE&gt;&lt;/PRE&gt;
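
&lt;P&gt;A minimal sketch of the rounding, runnable with makeresults (the "naive" field name is just illustrative; this assumes eval arithmetic uses IEEE-754 doubles, which are exact only for integers up to 2^53):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;|makeresults
| eval sequence_number="6675670249450679850"
| eval naive=tonumber(sequence_number) - 1
| eval subSeq=tonumber(substr(tostring(sequence_number), -6)), firstSeq=subSeq - 1
| table sequence_number,naive,subSeq,firstSeq
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Here naive comes back rounded, while subSeq and firstSeq stay exact because they never exceed six digits.&lt;/P&gt;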

&lt;P&gt;And I do something similar for the other log type.  But is there a better way?&lt;/P&gt;</description>
      <pubDate>Tue, 14 May 2019 21:04:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459132#M171233</guid>
      <dc:creator>craigkleen</dc:creator>
      <dc:date>2019-05-14T21:04:55Z</dc:date>
    </item>
    <item>
      <title>Re: Very large number math</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459133#M171234</link>
      <description>&lt;P&gt;Another solution is to derive a "shorter ID" and do the calculations based on that.&lt;/P&gt;

&lt;P&gt;Example&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;|makeresults
| eval sequence_number="6675670249450679850"
| rex field=sequence_number "(?&amp;lt;subSeq&amp;gt;\d{5})$"
| eval firstSeq=subSeq - 1
| table sequence_number,subSeq,firstSeq
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 14 May 2019 21:26:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459133#M171234</guid>
      <dc:creator>koshyk</dc:creator>
      <dc:date>2019-05-14T21:26:17Z</dc:date>
    </item>
    <item>
      <title>Re: Very large number math</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459134#M171235</link>
      <description>&lt;P&gt;Yeah, that gets me to the same place, but it gets a little unwieldy in my use case.  To expand my original data with an example, it's like:&lt;/P&gt;

&lt;P&gt;eventNum,Data,sequence_number&lt;BR /&gt;
eventOne,origData,6675670249450679850&lt;BR /&gt;
eventOne,origData,6675670249450679847&lt;BR /&gt;
eventOne,origData,6675670249450679801&lt;BR /&gt;
eventTwo,extradata,6675670249450679800&lt;BR /&gt;
eventOne,origData,6675670249450679653&lt;BR /&gt;
eventTwo,extradata,6675670249450679652&lt;BR /&gt;
eventOne,origData,6675670249450679645&lt;BR /&gt;
eventOne,origData,6675670249450679643&lt;BR /&gt;
eventTwo,extradata,6675670249450679642&lt;BR /&gt;
eventOne,origData,6675670249450679523&lt;BR /&gt;
eventTwo,extradata,6675670249450679522&lt;/P&gt;

&lt;P&gt;Here, I'm trying to add the fields in the "extradata" events to the fields in the "origData" events, when the only thing coupling the data is a sequence_number that's off by one.&lt;/P&gt;

&lt;P&gt;So, it seems shorter to use a single statement like:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| eval commonSeq=if(eventNum="eventOne", tonumber(substr(tostring(sequence_number), -6)) - 1, tonumber(substr(tostring(sequence_number), -6)))
| transaction commonSeq
&lt;/CODE&gt;&lt;/PRE&gt;
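
&lt;P&gt;For comparison, a stats-based sketch of the same grouping (just an assumption that the off-by-one pairing above is what's wanted; stats values(*) collapses each pair into one row and avoids transaction's memory constraints):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| eval commonSeq=if(eventNum="eventOne", tonumber(substr(tostring(sequence_number), -6)) - 1, tonumber(substr(tostring(sequence_number), -6)))
| stats values(*) as * by commonSeq
&lt;/CODE&gt;&lt;/PRE&gt;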

&lt;P&gt;I was really hoping there was a more function-based approach I'm missing, rather than a rex-based one.&lt;/P&gt;</description>
      <pubDate>Tue, 14 May 2019 21:47:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Very-large-number-math/m-p/459134#M171235</guid>
      <dc:creator>craigkleen</dc:creator>
      <dc:date>2019-05-14T21:47:45Z</dc:date>
    </item>
  </channel>
</rss>

