<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: File monitoring questions (top item change) in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72237#M14733</link>
    <description>&lt;P&gt;This is not simple but can be achieved.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt; First, index your events as multiline events by creating a specific sourcetype. 
The sourcetype has to be defined in props.conf (or via the data preview), 
and it must specify how you want the events to be broken.&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Here we want each &lt;CODE&gt;000000 SIZE = 099999 LOOP = 000000 WDTH = 001536 NWNO = 0004&lt;/CODE&gt;&lt;BR /&gt;
to be the beginning of a new event, so we will break on the line with SIZE&lt;/P&gt;

&lt;PRE&gt;
[test]
NO_BINARY_CHECK=1
BREAK_ONLY_BEFORE=\d+ SIZE =
SHOULD_LINEMERGE=true
pulldown_type=1
MAX_EVENTS=256
# we do not expect more than 256 lines per event.
&lt;/PRE&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;P&gt;Second, index your events using the new sourcetype.&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Third, at search time, extract the fields from the second part of the events.&lt;BR /&gt;
With multikv, each line is treated as a separate event (but the fields from the first line are common to all of them).&lt;BR /&gt;
You can then extract the fields from each line (using | as the separator)&lt;BR /&gt;
and finally filter out the header line to avoid confusion.&lt;/P&gt;&lt;/LI&gt;
&lt;/UL&gt;
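
&lt;P&gt;For the second step, a minimal inputs.conf monitor stanza could look like this (the file path here is an assumption, adjust it to your environment):&lt;/P&gt;

&lt;PRE&gt;
# inputs.conf : monitor the log file and apply the multiline sourcetype
[monitor:///var/log/test.log]
sourcetype = test
index = temp
&lt;/PRE&gt;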

&lt;PRE&gt;
source="*test.log" sourcetype=test | multikv noheader=t
| rex "(?&lt;ID&gt;\d+)[^|]*"
| rex "(?&lt;FIELDA&gt;\d+)\s\|\s(?&lt;FIELDB&gt;\w+)\s\|\s(?&lt;FIELDC&gt;[^\|]*)\s\|\s(?&lt;FIELDD&gt;\w+)\s\|\s(?&lt;FIELDE&gt;\w+)"
| search NOT "LOOP"
| table ID SIZE LOOP WDTH NWNO FIELDA FIELDB FIELDC FIELDD FIELDE _raw
&lt;/PRE&gt;</description>
    <pubDate>Tue, 25 Dec 2012 20:39:14 GMT</pubDate>
    <dc:creator>yannK</dc:creator>
    <dc:date>2012-12-25T20:39:14Z</dc:date>
    <item>
      <title>File monitoring questions (top item change)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72236#M14732</link>
      <description>&lt;P&gt;File monitoring questions&lt;/P&gt;

&lt;P&gt;I am monitoring a log file.&lt;BR /&gt;
The log file has a peculiar format for its log records.&lt;/P&gt;

&lt;H1&gt;Log format&lt;/H1&gt;

&lt;P&gt;000000 SIZE = 099999 LOOP = 000000 WDTH = 001536 NWNO = 0004&lt;BR /&gt;
00001 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00002 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00003 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00004 | AIX | 6.1 | LCID | xxx&lt;/P&gt;

&lt;P&gt;A new log record is written when the NWNO item in the log above changes.&lt;BR /&gt;
When the file is collected, duplicate indexing occurs.&lt;/P&gt;

&lt;P&gt;For example, the search&lt;/P&gt;

&lt;P&gt;index = temp 00001&lt;/P&gt;

&lt;P&gt;00001 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00001 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00001 | AIX | 6.1 | LCID | xxx&lt;BR /&gt;
00001 | AIX | 6.1 | LCID | xxx&lt;/P&gt;

&lt;P&gt;returns the same line indexed as duplicate events.&lt;/P&gt;

&lt;P&gt;Could there be a way to solve the problem?&lt;/P&gt;</description>
      <pubDate>Mon, 24 Dec 2012 05:25:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72236#M14732</guid>
      <dc:creator>jcisha</dc:creator>
      <dc:date>2012-12-24T05:25:28Z</dc:date>
    </item>
    <item>
      <title>Re: File monitoring questions (top item change)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72237#M14733</link>
      <description>&lt;P&gt;This is not simple but can be achieved.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt; First, index your events as multiline events by creating a specific sourcetype. 
The sourcetype has to be defined in props.conf (or via the data preview), 
and it must specify how you want the events to be broken.&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Here we want each &lt;CODE&gt;000000 SIZE = 099999 LOOP = 000000 WDTH = 001536 NWNO = 0004&lt;/CODE&gt;&lt;BR /&gt;
to be the beginning of a new event, so we will break on the line with SIZE&lt;/P&gt;

&lt;PRE&gt;
[test]
NO_BINARY_CHECK=1
BREAK_ONLY_BEFORE=\d+ SIZE =
SHOULD_LINEMERGE=true
pulldown_type=1
MAX_EVENTS=256
# we do not expect more than 256 lines per event.
&lt;/PRE&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;P&gt;Second, index your events using the new sourcetype.&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Third, at search time, extract the fields from the second part of the events.&lt;BR /&gt;
With multikv, each line is treated as a separate event (but the fields from the first line are common to all of them).&lt;BR /&gt;
You can then extract the fields from each line (using | as the separator)&lt;BR /&gt;
and finally filter out the header line to avoid confusion.&lt;/P&gt;&lt;/LI&gt;
&lt;/UL&gt;
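
&lt;P&gt;For the second step, a minimal inputs.conf monitor stanza could look like this (the file path here is an assumption, adjust it to your environment):&lt;/P&gt;

&lt;PRE&gt;
# inputs.conf : monitor the log file and apply the multiline sourcetype
[monitor:///var/log/test.log]
sourcetype = test
index = temp
&lt;/PRE&gt;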

&lt;PRE&gt;
source="*test.log" sourcetype=test | multikv noheader=t
| rex "(?&lt;ID&gt;\d+)[^|]*"
| rex "(?&lt;FIELDA&gt;\d+)\s\|\s(?&lt;FIELDB&gt;\w+)\s\|\s(?&lt;FIELDC&gt;[^\|]*)\s\|\s(?&lt;FIELDD&gt;\w+)\s\|\s(?&lt;FIELDE&gt;\w+)"
| search NOT "LOOP"
| table ID SIZE LOOP WDTH NWNO FIELDA FIELDB FIELDC FIELDD FIELDE _raw
&lt;/PRE&gt;</description>
      <pubDate>Tue, 25 Dec 2012 20:39:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72237#M14733</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2012-12-25T20:39:14Z</dc:date>
    </item>
    <item>
      <title>Re: File monitoring questions (top item change)</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72238#M14734</link>
      <description>&lt;P&gt;Thank you for the answer, yannK.&lt;/P&gt;

&lt;P&gt;In the end, is there a way to avoid collecting the duplicates in the first place?&lt;BR /&gt;
Or is filtering them out of the search results the only solution?&lt;/P&gt;

&lt;P&gt;Beyond uniqueness, the duplicates also cause licensing issues.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Dec 2012 06:14:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/File-monitoring-questions-top-item-change/m-p/72238#M14734</guid>
      <dc:creator>jcisha</dc:creator>
      <dc:date>2012-12-26T06:14:18Z</dc:date>
    </item>
  </channel>
</rss>

