<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Buffer overflow for field? in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Buffer-overflow-for-field/m-p/125734#M33984</link>
    <description>&lt;P&gt;I don't work for Splunk, so I can't answer this with 100% certainty, but the fact that the first 15 digits are correct and the trailing digits are not suggests that internally Splunk stores numeric field values as IEEE 754 double-precision floating-point numbers, which carry roughly 15 significant decimal digits of precision.&lt;/P&gt;

&lt;P&gt;Splunk doesn't corrupt text fields longer than 15 characters, so one workaround would be to put a letter at the beginning of your string of 1s and 0s so that the overall field value is treated as text rather than as a number.  Presumably you're testing individual bits to see whether they're 0 or 1 using substr() or similar, so you could still do that with a letter at the beginning of the field.&lt;/P&gt;</description>
    <pubDate>Tue, 05 Nov 2013 17:42:06 GMT</pubDate>
    <dc:creator>dmr195</dc:creator>
    <dc:date>2013-11-05T17:42:06Z</dc:date>
    <item>
      <title>Buffer overflow for field?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Buffer-overflow-for-field/m-p/125733#M33983</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;

&lt;P&gt;I would like to know whether there is any limit on the size of a field value.&lt;/P&gt;

&lt;P&gt;I tried to create a field consisting of a 1 followed by 25 zeros:&lt;/P&gt;

&lt;P&gt;10000000000000000000000000&lt;/P&gt;

&lt;P&gt;The result was:&lt;/P&gt;

&lt;P&gt;10000000000000000905969664&lt;/P&gt;

&lt;P&gt;It looks like a buffer overflow or something similar.&lt;/P&gt;

&lt;P&gt;Any help is appreciated!&lt;/P&gt;

&lt;P&gt;Thanks,&lt;BR /&gt;
Yu&lt;/P&gt;</description>
      <pubDate>Fri, 01 Nov 2013 12:46:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Buffer-overflow-for-field/m-p/125733#M33983</guid>
      <dc:creator>yuwtennis</dc:creator>
      <dc:date>2013-11-01T12:46:54Z</dc:date>
    </item>
    <item>
      <title>Re: Buffer overflow for field?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Buffer-overflow-for-field/m-p/125734#M33984</link>
      <description>&lt;P&gt;I don't work for Splunk, so I can't answer this with 100% certainty, but the fact that the first 15 digits are correct and the trailing digits are not suggests that internally Splunk stores numeric field values as IEEE 754 double-precision floating-point numbers, which carry roughly 15 significant decimal digits of precision.&lt;/P&gt;
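
&lt;P&gt;A quick sanity check for this theory (a minimal sketch, not Splunk itself: Python's float happens to be the same IEEE 754 double representation) reproduces the exact value reported above:&lt;/P&gt;

```python
# Sketch: Python's float is an IEEE 754 double, the representation
# this answer suggests Splunk uses internally for numeric fields.
n = int("1" + "0" * 25)   # 1 followed by 25 zeros, as in the question
stored = float(n)         # rounded to the nearest representable double
print(int(stored))        # prints 10000000000000000905969664
print(int(stored) == n)   # prints False: the value was rounded
```

&lt;P&gt;So the corrupted tail is deterministic rounding to the nearest representable double, not arbitrary garbage.&lt;/P&gt;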

&lt;P&gt;Splunk doesn't corrupt text fields longer than 15 characters, so one workaround would be to put a letter at the beginning of your string of 1s and 0s so that the overall field value is treated as text rather than as a number.  Presumably you're testing individual bits to see whether they're 0 or 1 using substr() or similar, so you could still do that with a letter at the beginning of the field.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Nov 2013 17:42:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Buffer-overflow-for-field/m-p/125734#M33984</guid>
      <dc:creator>dmr195</dc:creator>
      <dc:date>2013-11-05T17:42:06Z</dc:date>
    </item>
  </channel>
</rss>

