<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to get a random sampling from a large data set in Hunk? in All Apps and Add-ons</title>
    <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202916#M73250</link>
    <description>&lt;P&gt;Very kind of you Claw!!&lt;/P&gt;</description>
    <pubDate>Tue, 26 Apr 2016 14:52:52 GMT</pubDate>
    <dc:creator>ddrillic</dc:creator>
    <dc:date>2016-04-26T14:52:52Z</dc:date>
    <item>
      <title>How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202907#M73241</link>
      <description>&lt;P&gt;We have around two billion claims spanning roughly three years. The client is interested in a good data sample of, let's say, 100 claims. They mention that with SQL, they do something like &lt;CODE&gt;order by newid()&lt;/CODE&gt; to fetch 100 random records from a table.&lt;BR /&gt;
Any ideas on how to do something similar with Hunk/Splunk?&lt;/P&gt;</description>
      <pubDate>Thu, 14 Apr 2016 15:51:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202907#M73241</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2016-04-14T15:51:17Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202908#M73242</link>
      <description>&lt;P&gt;In Hunk 6.4, sampling is done by files, not events. The Hunk sampling setting is called &lt;CODE&gt;vix.split.sample.rate&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;When you set &lt;CODE&gt;vix.split.sample.rate = 0.25&lt;/CODE&gt;, each split has a 1-in-4 probability of being accepted (it does not mean that every 4th split will be accepted).&lt;BR /&gt;
For large numbers of splits, this means that roughly 25% will be accepted, that it will not be the same 25% each time, and that we do our best to ensure iteration order does not determine which splits we accept. For small numbers of splits, however, it is hard to predict how many we get back.&lt;/P&gt;
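
&lt;P&gt;For reference, a minimal sketch of where this setting lands in indexes.conf (the provider name, index name, and input path below are placeholders, not from an actual deployment):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[provider:my_hadoop_provider]
vix.family = hadoop

[my_virtual_index]
vix.provider = my_hadoop_provider
vix.input.1.path = /data/claims/...
# accept each split with probability 0.25
vix.split.sample.rate = 0.25
&lt;/CODE&gt;&lt;/PRE&gt;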

&lt;P&gt;In Splunk 6.4 we also have the event sampling feature: &lt;A href="http://docs.splunk.com/Documentation/Splunk/6.4.0/Search/Retrieveasamplesetofevents"&gt;http://docs.splunk.com/Documentation/Splunk/6.4.0/Search/Retrieveasamplesetofevents&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Apr 2016 15:59:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202908#M73242</guid>
      <dc:creator>rdagan_splunk</dc:creator>
      <dc:date>2016-04-14T15:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202909#M73243</link>
      <description>&lt;P&gt;Great. So, if we want exactly 100 "good" samples from two billion events in Splunk 6.3.3, what should we do?&lt;/P&gt;</description>
      <pubDate>Fri, 15 Apr 2016 13:56:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202909#M73243</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2016-04-15T13:56:18Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202910#M73244</link>
      <description>&lt;P&gt;This feature - if I am not mistaken - is available only in Hunk 6.4, not 6.3.3.&lt;BR /&gt;
As we highlighted, the number of splits sampled is approximate, not exact. Therefore, for approximately 100 out of 2 billion, the value should be 100/2,000,000,000 = 0.00000005.&lt;BR /&gt;
If you want to make sure the search is limited to only 100 results, you can add - in addition to the above - something like this:&lt;BR /&gt;
index=abc | stats count by source | head 100 | .. &lt;/P&gt;</description>
      <pubDate>Fri, 15 Apr 2016 16:47:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202910#M73244</guid>
      <dc:creator>rdagan_splunk</dc:creator>
      <dc:date>2016-04-15T16:47:39Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202911#M73245</link>
      <description>&lt;P&gt;Do you mean 100 events fairly sampled from a virtual index?  If so, the problem is that, in order to do it efficiently, you would need an external index of some sort. Since Hunk does not directly manage your data, it does not maintain an index. Hunk cannot know how many events are in each file in your virtual index without reading them, and it does not know the starting offsets of the events within a file, so it cannot do fair sampling without reading all the files that your search would hit. &lt;/P&gt;

&lt;P&gt;If you are willing to sample inefficiently, you could have Hunk read all the events and pipe them to a custom command that would do the sampling by randomly deciding whether to keep each event. I believe such a command is included in the Machine Learning Toolkit on Splunkbase. But if the whole reason you are sampling is to speed up the search, then this may not be what you want.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Apr 2016 22:14:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202911#M73245</guid>
      <dc:creator>kschon_splunk</dc:creator>
      <dc:date>2016-04-15T22:14:54Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202912#M73246</link>
      <description>&lt;P&gt;kschon_splunk, let's keep in mind that the claims are spread pretty much evenly across several years. They also reside in 90 Sqoop-generated files. What would be a reasonable way to generate these 100 samples? Speed is not important; it's more important that the process can handle 2 billion claims without bailing out.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Apr 2016 16:02:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202912#M73246</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2016-04-25T16:02:01Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202913#M73247</link>
      <description>&lt;P&gt;Are you still asking for a statistically verifiable data sample, or are you using this to return a limited subset of the total data?&lt;/P&gt;

&lt;P&gt;Using this capability to return a limited subset of the data is more manageable than returning a statistically verifiable random sample of the events.&lt;/P&gt;

&lt;P&gt;In native Splunk we own the data storage algorithm and can therefore create a sample that is verifiable against the total records. In Hunk you rely on Hadoop to store the data, so we can only deliver a verifiable random sample by returning random Hadoop files. Any randomly selected Hadoop file may contain zero or more records that meet the search criteria. This means we can only approximate the total percentage of records returned from the random sample of Hadoop files examined.&lt;/P&gt;

&lt;P&gt;In fact, we will not know whether the 25% sample of files mentioned above contains 10% of the records in question or 45%, because we do not know the distribution of target records within a Hadoop file.&lt;/P&gt;

&lt;P&gt;Remember, there is no inherent organization in a Hadoop file. It is just whatever data your sources have put into it.&lt;/P&gt;

&lt;P&gt;Now - and here is the interesting point in this process - you could extract all of the searchable target terms from Hadoop and index them in Splunk, together with the name of the Hadoop file that contains them. You would then gain all of the Splunk advantages while keeping the original data in Hadoop. Searches become a two-step process.&lt;/P&gt;

&lt;P&gt;Search the Splunk index to find the terms you want.&lt;BR /&gt;
Use the list of Hadoop files containing the terms and dates you are interested in to run a Hunk search that returns the actual records, then perform whatever further investigation you need on a much-reduced subset of all your Hadoop files.&lt;BR /&gt;
This means that every search begins by selecting the applicable terms and dates, then runs a final search to return the raw data. At that point, you can format the output to suit your needs in Hunk, or export all of the returned results to your favorite tool.&lt;/P&gt;
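
&lt;P&gt;As a rough sketch (the index, field, and file names below are invented purely for illustration), the first search might collect the Hadoop files that contain a term of interest:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=claim_terms term="diabetes" | stats values(hadoop_file) AS files
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;and the second would be a Hunk search restricted to those files:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=hunk_claims (source="/claims/part-00012" OR source="/claims/part-00057") diabetes | ...
&lt;/CODE&gt;&lt;/PRE&gt;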

&lt;P&gt;At the end of the first search, all of the statistics and sampling will work for those terms just as you requested.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Apr 2016 16:37:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202913#M73247</guid>
      <dc:creator>Claw</dc:creator>
      <dc:date>2016-04-25T16:37:44Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202914#M73248</link>
      <description>&lt;P&gt;In that case (i.e. speed is not important, you just need to make sure the query does not fail), then there are indeed some ways to do this. If I wanted a random sample of 1/1000th of my data-set, I could try this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=foo | eval synthId=random()/2147483647 | table _time synthId _raw | search synthId &amp;lt; 0.001 | table _raw
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;—The middle command creates a new synthetic ID field called synthId. The description for the random() function (&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.4.0/SearchReference/CommonEvalFunctions"&gt;http://docs.splunk.com/Documentation/Splunk/6.4.0/SearchReference/CommonEvalFunctions&lt;/A&gt;) states that it creates a pseudo-random number between 0 and 2^31-1 = 2147483647, so &lt;CODE&gt;random()/2147483647&lt;/CODE&gt; creates a pseudo-random decimal number between 0.0 and 1.0. I want 1/1000th, so I take events with a value less than 1/1000. &lt;BR /&gt;
—The final &lt;CODE&gt;table&lt;/CODE&gt; guarantees that filtering will happen on the task nodes, instead of bringing all events to the search head. Any “aggregating” command will do.&lt;/P&gt;

&lt;P&gt;In your case, we could try “search synthId &amp;lt; 0.00000005” (100 / 2 billion = 5*10^-8). But it’s now reasonable to start worrying about round-off error, so it’s probably better if we calculate for ourselves that (2^31-1) * 100 / 2 billion = 107.4, so we get:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=foo | eval synthId=random() | table _time synthId _raw | search synthId &amp;lt;= 107 | ….
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This will get you a sample that is approximately the right size, is different every time, and is statistically correct. If you want a sample of exactly the right size, you could get too many items and take the first 100, e.g.:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=foo | eval synthId=random() | table _time synthId _raw | search synthId &amp;lt;= 150 | head 100 | ….
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This will slightly bias the sample based on iteration order. On the other hand, if you want a repeatable sample, you can use a hash instead of a random number. For example, you could use &lt;CODE&gt;synthId=tonumber(substr(md5(_raw), -8), 16)&lt;/CODE&gt; to get a pseudo-random number between 0 and 4294967295 that will be the same for a given event, every time you calculate it. Then you can use all the tricks above.&lt;/P&gt;
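
&lt;P&gt;Putting those pieces together, a repeatable version of the earlier sample might look like the sketch below (my untested combination of the tricks above, not an official recipe). The threshold comes from 2^32 * 100 / 2 billion ≈ 214.7:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=foo | eval synthId=tonumber(substr(md5(_raw), -8), 16) | table _time synthId _raw | search synthId &amp;lt;= 214 | head 100
&lt;/CODE&gt;&lt;/PRE&gt;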

&lt;P&gt;Hopefully this helps.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Apr 2016 20:09:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202914#M73248</guid>
      <dc:creator>kschon_splunk</dc:creator>
      <dc:date>2016-04-25T20:09:58Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202915#M73249</link>
      <description>&lt;P&gt;For a relatively small table of 30 million we tried the following -&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=provider | eval rand=random() % 100 | where rand==0 | head 100
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;It seems to work just fine for this small data set, but I don't know whether it can be used for 2 billion claims... &lt;/P&gt;</description>
      <pubDate>Tue, 26 Apr 2016 14:52:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202915#M73249</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2016-04-26T14:52:17Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202916#M73250</link>
      <description>&lt;P&gt;Very kind of you Claw!!&lt;/P&gt;</description>
      <pubDate>Tue, 26 Apr 2016 14:52:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202916#M73250</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2016-04-26T14:52:52Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202917#M73251</link>
      <description>&lt;P&gt;The reason I didn't suggest something like that before is that random() picks a number uniformly distributed from 0 to 2,147,483,647. Numbers ending in 00 to 47 show up one more time in that range than the numbers 48 to 99. In this case, there are 21,474,836 full sets of numbers and one incomplete set, so it will be very hard to detect the difference. But if you did:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;...| eval rand=random() % 2000000000 | where rand &amp;lt; 100
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Then everything from 0 to 147,483,647 will show up twice as often as everything from 147,483,648 to 1,999,999,999, so 00 - 99 will be significantly over-represented, and you will get too many events.&lt;/P&gt;</description>
      <pubDate>Tue, 26 Apr 2016 17:46:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202917#M73251</guid>
      <dc:creator>kschon_splunk</dc:creator>
      <dc:date>2016-04-26T17:46:27Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202918#M73252</link>
      <description>&lt;P&gt;This just proves that statistics hurts the brain. @kschon, we will try what you've suggested above. It's an elegant solution, and we'll look to couple it with the vix sample rate for performance.&lt;/P&gt;</description>
      <pubDate>Mon, 02 May 2016 19:05:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202918#M73252</guid>
      <dc:creator>jwiedemann_splu</dc:creator>
      <dc:date>2016-05-02T19:05:51Z</dc:date>
    </item>
    <item>
      <title>Re: How to get a random sampling from a large data set in Hunk?</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202919#M73253</link>
      <description>&lt;P&gt;I agree, statistics and brain health are at odds. Hopefully this will help. You can, of course, use the simpler solution and add a "head 100" command. It just won't spread the events around as evenly.&lt;/P&gt;</description>
      <pubDate>Mon, 02 May 2016 23:00:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-a-random-sampling-from-a-large-data-set-in-Hunk/m-p/202919#M73253</guid>
      <dc:creator>kschon_splunk</dc:creator>
      <dc:date>2016-05-02T23:00:58Z</dc:date>
    </item>
  </channel>
</rss>

