<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: filtering on yesterday's date? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60034#M11886</link>
    <description>&lt;P&gt;No. The filters that can be applied before events go into the index are regex-based and work event by event only, so there's no mechanism for looking up duplicates already in the index and filtering based on that (which is wise; I imagine that would have a severe impact on performance).&lt;/P&gt;</description>
    <pubDate>Fri, 25 May 2012 06:03:07 GMT</pubDate>
    <dc:creator>Ayn</dc:creator>
    <dc:date>2012-05-25T06:03:07Z</dc:date>
    <item>
      <title>filtering on yesterday's date?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60031#M11883</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I have a CSV file in a nice format (see below). The data is for rolling 7/10/21-day reports that customers control, and we want to import it into Splunk. Since it's rolling data, a large chunk of each file duplicates previous days. All I want to process is yesterday's data. Is there a way to look at the data and filter out the unnecessary entries?&lt;/P&gt;

&lt;P&gt;Sample data:&lt;/P&gt;

&lt;P&gt;Timestamp=17-May-12 15:45:00,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=66&lt;BR /&gt;
Timestamp=17-May-12 15:46:06,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=3&lt;BR /&gt;
Timestamp=17-May-12 15:46:09,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=32&lt;BR /&gt;
Timestamp=17-May-12 15:46:41,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=30&lt;BR /&gt;
Timestamp=17-May-12 15:47:11,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=27&lt;BR /&gt;
Timestamp=17-May-12 15:47:38,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=65&lt;BR /&gt;
Timestamp=17-May-12 15:48:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=31&lt;BR /&gt;
Timestamp=17-May-12 15:49:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=29&lt;BR /&gt;
Timestamp=17-May-12 15:49:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=31&lt;BR /&gt;
Timestamp=17-May-12 15:50:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=30&lt;BR /&gt;
Timestamp=17-May-12 15:50:44,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=28.00000000,DURATION=65&lt;BR /&gt;
Timestamp=17-May-12 15:51:49,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=27.00000000,DURATION=2&lt;BR /&gt;
Timestamp=17-May-12 15:51:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=32&lt;BR /&gt;
Timestamp=17-May-12 15:52:23,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30&lt;BR /&gt;
Timestamp=17-May-12 15:52:53,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=27&lt;BR /&gt;
Timestamp=17-May-12 15:53:20,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30&lt;BR /&gt;
Timestamp=17-May-12 15:53:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=34.00000000,DURATION=61&lt;BR /&gt;
Timestamp=17-May-12 15:54:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=30&lt;BR /&gt;
Timestamp=17-May-12 15:55:21,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=29&lt;BR /&gt;
Timestamp=17-May-12 15:55:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=31&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 11:51:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60031#M11883</guid>
      <dc:creator>a212830</dc:creator>
      <dc:date>2020-09-28T11:51:56Z</dc:date>
    </item>
    <item>
      <title>Re: filtering on yesterday's date?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60032#M11884</link>
      <description>&lt;P&gt;I think you can do this with time modifiers in your search.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;... earliest=-1d@d latest=@d | ...
&lt;/CODE&gt;&lt;/PRE&gt;
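&lt;P&gt;For instance, a full search over yesterday's events might look like this. (This is a sketch, not a tested search; the index and sourcetype names are placeholders, and the field names are taken from the sample data above.)&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=main sourcetype=csv_perf earliest=-1d@d latest=@d
| dedup Timestamp host metric
| stats avg(value) AS avg_value BY host metric
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The &lt;CODE&gt;dedup&lt;/CODE&gt; step drops repeated rows within the search window, which helps at search time even if duplicates made it into the index.&lt;/P&gt;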

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/latest/User/ChangeTheTimeRangeOfYourSearch"&gt;http://docs.splunk.com/Documentation/Splunk/latest/User/ChangeTheTimeRangeOfYourSearch&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 25 May 2012 00:12:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60032#M11884</guid>
      <dc:creator>sdaniels</dc:creator>
      <dc:date>2012-05-25T00:12:33Z</dc:date>
    </item>
    <item>
      <title>Re: filtering on yesterday's date?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60033#M11885</link>
      <description>&lt;P&gt;Not a search; I want to stop duplicate data from getting into the system.&lt;/P&gt;</description>
      <pubDate>Fri, 25 May 2012 00:14:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60033#M11885</guid>
      <dc:creator>a212830</dc:creator>
      <dc:date>2012-05-25T00:14:11Z</dc:date>
    </item>
    <item>
      <title>Re: filtering on yesterday's date?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60034#M11886</link>
      <description>&lt;P&gt;No. The filters that can be applied before events go into the index are regex-based and work event by event only, so there's no mechanism for looking up duplicates already in the index and filtering based on that (which is wise; I imagine that would have a severe impact on performance).&lt;/P&gt;</description>
      <pubDate>Fri, 25 May 2012 06:03:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60034#M11886</guid>
      <dc:creator>Ayn</dc:creator>
      <dc:date>2012-05-25T06:03:07Z</dc:date>
    </item>
    <item>
      <title>Re: filtering on yesterday's date?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60035#M11887</link>
      <description>&lt;P&gt;Makes sense. Thanks. I'll try to pre-parse the data.&lt;/P&gt;</description>
      <pubDate>Fri, 25 May 2012 11:15:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/filtering-on-yesterdays-date/m-p/60035#M11887</guid>
      <dc:creator>a212830</dc:creator>
      <dc:date>2012-05-25T11:15:50Z</dc:date>
    </item>
  </channel>
</rss>

