<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Should I &quot;normalize&quot; data prior to indexing? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305285#M57616</link>
    <description>&lt;P&gt;Without regard to your question, or the discussion about indexing volumes, you can use a regex to extract the &lt;CODE&gt;fn:23l4dixr&lt;/CODE&gt; portion to a different field at index or search time, your choice.  Why wouldn't you?&lt;/P&gt;</description>
    <pubDate>Sun, 10 Sep 2017 20:20:15 GMT</pubDate>
    <dc:creator>DalJeanis</dc:creator>
    <dc:date>2017-09-10T20:20:15Z</dc:date>
    <item>
      <title>Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305280#M57611</link>
      <description>&lt;P&gt;I have the opportunity to pull in some ticket system data and create some statistics / visualizations. The data consists of many “categories”. However, there are some details in the SUMMARY field that keep me from grouping/counting, etc., by SUMMARY, as the SUMMARY value is unique in the last couple of characters. Here’s a sample of the SUMMARY field data:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Pastebin extraction fn:23l4dixr
Pastebin extraction fn:xx3l9dib
Pastebin extraction fn:dk244diL
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;I would like to group/count by "&lt;STRONG&gt;Pastebin extraction&lt;/STRONG&gt;". My first (successful) attempt was to build regexes that I applied to the file BEFORE pulling it into Splunk, removing the unique fn:xxxxxxxx at the end of the SUMMARY field. I then created a separate index and pulled the data in using the CSV sourcetype. Thanks to the column headers, it appears Splunk had no issues parsing the field data. This allowed me to group/count, which was a good learning experience in and of itself. But now I have no details if I need them.&lt;/P&gt;

&lt;P&gt;It seems that most folks likely don’t massage data prior to a forwarder picking up the data. Perhaps then, the normalization, if you will, occurs just prior to indexing? Or perhaps during query? Maybe it’s possible either way?&lt;/P&gt;

&lt;P&gt;At any rate, I’d appreciate a breadcrumb / link to some reading on how to remove the pre-processing step and perform this a bit further down the line.&lt;/P&gt;

&lt;P&gt;Is learning to properly use props.conf and transforms.conf my only (or best) approach? &lt;/P&gt;

&lt;P&gt;What if I want to retain the unique details “just-in-case” and don’t want it removed prior to indexing?&lt;/P&gt;
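
&lt;P&gt;To make that concrete: from skimming the docs, something like this seems possible at search time, with the full SUMMARY still indexed as-is (untested, and &lt;CODE&gt;summary_base&lt;/CODE&gt; / &lt;CODE&gt;summary_id&lt;/CODE&gt; are names I made up):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=tickets sourcetype=csv
| rex field=SUMMARY "^(?&lt;summary_base&gt;.+)\s+fn:(?&lt;summary_id&gt;\S+)$"
| stats count BY summary_base
&lt;/CODE&gt;&lt;/PRE&gt;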

&lt;P&gt;Apologies if my terminology is not up to snuff; I'm just getting started with Splunk.&lt;/P&gt;

&lt;P&gt;Thanks,&lt;BR /&gt;
Sudsy&lt;/P&gt;</description>
      <pubDate>Wed, 30 Aug 2017 04:42:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305280#M57611</guid>
      <dc:creator>msutfin1</dc:creator>
      <dc:date>2017-08-30T04:42:06Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305281#M57612</link>
      <description>&lt;P&gt;Hi msutfin1,&lt;BR /&gt;
usually you don't need to normalize your logs before indexing.&lt;BR /&gt;
It can be useful to transform some logs (if your specifications permit modifying them) when you have specific needs, e.g. masking values such as passwords or credit card numbers, or when the format of your logs is variable and sometimes wrong (e.g. I receive logs from multiple sources, and sometimes some of them have a wrong date format).&lt;BR /&gt;
Usually you can extract your fields using regexes.&lt;BR /&gt;
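&lt;/P&gt;

&lt;P&gt;For example, a search-time extraction in props.conf could look like this (the sourcetype stanza and field names are only placeholders for your data):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[your_ticket_sourcetype]
EXTRACT-summary_parts = ^(?&lt;summary_base&gt;.+)\s+fn:(?&lt;summary_id&gt;\S+)$ in SUMMARY
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;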
I hope this is useful for you.&lt;BR /&gt;
Bye.&lt;BR /&gt;
Giuseppe&lt;/P&gt;</description>
      <pubDate>Wed, 30 Aug 2017 10:09:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305281#M57612</guid>
      <dc:creator>gcusello</dc:creator>
      <dc:date>2017-08-30T10:09:44Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305282#M57613</link>
      <description>&lt;P&gt;@msutfin1, Splunk reads time-series data. So the most important thing for you to tell Splunk about your data is (1) &lt;STRONG&gt;how to identify time&lt;/STRONG&gt; and (2) &lt;STRONG&gt;how to break events&lt;/STRONG&gt;.&lt;/P&gt;

&lt;P&gt;You do this through either a built-in sourcetype (for industry-standard logs already defined in Splunk) or a custom sourcetype (for your custom logs or use case). The sourcetype provides Splunk with a "schema on the fly". For example, fields are extracted, transformed, aliased, and calculated based on which sourcetype they belong to. So, in other words, you should definitely be well versed in &lt;CODE&gt;props.conf&lt;/CODE&gt; and &lt;CODE&gt;transforms.conf&lt;/CODE&gt;.&lt;/P&gt;
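
&lt;P&gt;As a sketch (the stanza name, timestamp field, and time format are placeholders for your data), a custom CSV sourcetype in &lt;CODE&gt;props.conf&lt;/CODE&gt; might look like:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[custom_ticket_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = created_date
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
&lt;/CODE&gt;&lt;/PRE&gt;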

&lt;P&gt;Even if you do not define timestamps and event breaks explicitly, in most cases Splunk's automatic/default logic does the job for you. But if it fails, data might not be indexed as you expect it to be. So it is always best to take some sample logs in a file, upload them to a test/POC Splunk machine, and check in the data preview mode that the data is getting indexed the way you expect.&lt;BR /&gt;
Refer to some of the documentation: &lt;BR /&gt;
&lt;STRONG&gt;Configure event line breaking&lt;/STRONG&gt;: &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking"&gt;https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking&lt;/A&gt;&lt;BR /&gt;
&lt;STRONG&gt;Getting Data In Tutorial&lt;/STRONG&gt;: &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/GetthetutorialdataintoSplunk"&gt;https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/GetthetutorialdataintoSplunk&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Having said this, ideally Splunk should not drop the final piece of your event by default unless you have a newline character before it (Splunk uses newline characters, \n and \r, in its default LINE_BREAKER, as you might have seen in props.conf). So you might need to define event line breaking properly. For us to assist you, you might have to add complete sample events (mock/anonymize any sensitive information in your data before posting).&lt;/P&gt;</description>
      <pubDate>Wed, 30 Aug 2017 10:34:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305282#M57613</guid>
      <dc:creator>niketn</dc:creator>
      <dc:date>2017-08-30T10:34:29Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305283#M57614</link>
      <description>&lt;P&gt;-- It seems that most folks likely don’t massage data prior to a forwarder picking up the data. Perhaps then, the normalization, if you will, occurs just prior to indexing? Or perhaps during query? Maybe it’s possible either way?&lt;/P&gt;

&lt;P&gt;You are absolutely right - &lt;STRONG&gt;most folks likely don’t massage data prior to a forwarder picking up the data.&lt;/STRONG&gt;&lt;BR /&gt;
Then, when they hit the license limit at 100TB+, they wonder what went wrong. Splunk, as a company, chose to encourage us to stream data as-is and worry about normalization, validation, and schema association a bit later. I'm not clear why...&lt;/P&gt;

&lt;P&gt;Yesterday, I attended a demo of the open-source competitor &lt;STRONG&gt;Graylog&lt;/STRONG&gt;, which encourages you to do the exact opposite. So, I guess the right answer might be somewhere in between. Maybe we should stream data as-is into dev, understand it, and handle it, and when all is normalized, validated, etc., we can stream it to production...&lt;/P&gt;
      <pubDate>Wed, 30 Aug 2017 14:28:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305283#M57614</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2017-08-30T14:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305284#M57615</link>
      <description>&lt;P&gt;The Splunk philosophy is to send data in exactly the way it is.  When you need to schematize, do it at search time.  If that is too slow, then normalize everything at search time, pull it into a &lt;CODE&gt;datamodel&lt;/CODE&gt; using &lt;CODE&gt;eventtypes&lt;/CODE&gt; and &lt;CODE&gt;tags&lt;/CODE&gt;, accelerate that, and use &lt;CODE&gt;tstats&lt;/CODE&gt;.  That is pretty much what the &lt;CODE&gt;CIM&lt;/CODE&gt; is:&lt;/P&gt;
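
&lt;P&gt;For example, once a datamodel is accelerated, a &lt;CODE&gt;tstats&lt;/CODE&gt; search like this one (sketched against the CIM &lt;CODE&gt;Web&lt;/CODE&gt; datamodel; your model and fields will differ) runs off the acceleration summaries instead of the raw events:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| tstats summariesonly=true count FROM datamodel=Web WHERE Web.status=404 BY Web.uri_path
&lt;/CODE&gt;&lt;/PRE&gt;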

&lt;P&gt;Read “Use the CIM to normalize data at search time” documentation page:&lt;BR /&gt;
&lt;A href="http://docs.splunk.com/Documentation/CIM/latest/User/UsetheCIMtonormalizedataatsearchtime"&gt;http://docs.splunk.com/Documentation/CIM/latest/User/UsetheCIMtonormalizedataatsearchtime&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Read the “Use the CIM to normalize OSSEC data” documentation page.&lt;BR /&gt;
Most of the time, maybe always, there will be an &lt;CODE&gt;app&lt;/CODE&gt; to help you assign a &lt;CODE&gt;sourcetype&lt;/CODE&gt; into a &lt;CODE&gt;datamodel&lt;/CODE&gt;, but sometimes we may have to do this ourselves. Even if you don’t, this page is both very short and highly educational, so it is well worth the time. It shows a minimal configuration that allows you to use &lt;CODE&gt;sourcetypes&lt;/CODE&gt; with the &lt;CODE&gt;CIM datamodels&lt;/CODE&gt;:&lt;BR /&gt;
&lt;A href="http://docs.splunk.com/Documentation/CIM/4.8.0/User/UsetheCIMtonormalizeOSSECdata"&gt;http://docs.splunk.com/Documentation/CIM/4.8.0/User/UsetheCIMtonormalizeOSSECdata&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 10 Sep 2017 20:08:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305284#M57615</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2017-09-10T20:08:30Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305285#M57616</link>
      <description>&lt;P&gt;Without regard to your question, or the discussion about indexing volumes, you can use a regex to extract the &lt;CODE&gt;fn:23l4dixr&lt;/CODE&gt; portion to a different field at index or search time, your choice.  Why wouldn't you?&lt;/P&gt;</description>
      <pubDate>Sun, 10 Sep 2017 20:20:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305285#M57616</guid>
      <dc:creator>DalJeanis</dc:creator>
      <dc:date>2017-09-10T20:20:15Z</dc:date>
    </item>
    <item>
      <title>Re: Should I "normalize" data prior to indexing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305286#M57617</link>
      <description>&lt;P&gt;That would certainly allow running stats on the leading portion of the field. I will begin looking for documentation, or perhaps a tutorial, on how one does that.&lt;/P&gt;

&lt;P&gt;These are summary fields from a ticketing system. So each "type" of ticket (the above being one example) has something that makes it unique (an incident number, a vuln assessment tag, the internal team that initiated it).&lt;/P&gt;

&lt;P&gt;Fortunately, these unique portions of each summary appear in the same position and have a distinct format, so writing a regex for each type of field should be straightforward once I learn how to perform that "separation" at index time.&lt;/P&gt;
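
&lt;P&gt;From what I've read so far, the index-time version would look something like this - the stanza and field names are my own guesses, untested:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# props.conf
[ticket_sourcetype]
TRANSFORMS-fn = extract_fn_id

# transforms.conf
[extract_fn_id]
REGEX = fn:(\S+)
FORMAT = fn_id::$1
WRITE_META = true
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;(Apparently the new indexed field also needs an entry in fields.conf with INDEXED = true so that searches handle it correctly.)&lt;/P&gt;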

&lt;P&gt;It will be a bit time-consuming, as there are 300-400 summary "templates". Once done, though, the obligatory maintenance as new types of tickets are added and old types are deprecated shouldn't be unbearable.&lt;/P&gt;

&lt;P&gt;Thanks much..&lt;BR /&gt;
Mark&lt;/P&gt;</description>
      <pubDate>Sun, 10 Sep 2017 22:29:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Should-I-quot-normalize-quot-data-prior-to-indexing/m-p/305286#M57617</guid>
      <dc:creator>msutfin1</dc:creator>
      <dc:date>2017-09-10T22:29:24Z</dc:date>
    </item>
  </channel>
</rss>

