Getting Data In

filtering on yesterday's date?

a212830
Champion

Hi,

I have a CSV file in a nice format (see below). The data is for rolling 7/10/21-day reports that customers control, and we want to import it into Splunk. Since it's rolling data, a large chunk of it is duplicated from previous days. All I want to process is yesterday's data. Is there a way to look at the data and filter out the unnecessary (already-seen) records?

Sample data:

Timestamp=17-May-12 15:45:00,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=66
Timestamp=17-May-12 15:46:06,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=3
Timestamp=17-May-12 15:46:09,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=32
Timestamp=17-May-12 15:46:41,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=30
Timestamp=17-May-12 15:47:11,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=27
Timestamp=17-May-12 15:47:38,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=65
Timestamp=17-May-12 15:48:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=31
Timestamp=17-May-12 15:49:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=29
Timestamp=17-May-12 15:49:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=31
Timestamp=17-May-12 15:50:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=30
Timestamp=17-May-12 15:50:44,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=28.00000000,DURATION=65
Timestamp=17-May-12 15:51:49,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=27.00000000,DURATION=2
Timestamp=17-May-12 15:51:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=32
Timestamp=17-May-12 15:52:23,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30
Timestamp=17-May-12 15:52:53,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=27
Timestamp=17-May-12 15:53:20,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30
Timestamp=17-May-12 15:53:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=34.00000000,DURATION=61
Timestamp=17-May-12 15:54:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=30
Timestamp=17-May-12 15:55:21,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=29
Timestamp=17-May-12 15:55:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=31

1 Solution

Ayn
Legend

No. The filters that can be applied to events before they go into the index are regex-based and work on one event at a time, so there's no mechanism for checking the index for duplicates and filtering on that (which is wise; I imagine that would have a severe impact on performance...).
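
For reference, this is roughly what that regex-based, index-time filtering looks like in props.conf and transforms.conf. A minimal sketch only; the sourcetype and stanza names are placeholders, and the "keep" regex has to be static text, which is exactly the limitation described above:

# props.conf -- attach the filtering transforms to the sourcetype
[rolling_csv]
TRANSFORMS-filter = drop_all, keep_wanted

# transforms.conf -- send everything to the null queue first...
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route events matching a fixed pattern back to the index.
# The REGEX cannot compute "yesterday" or compare against already-indexed data.
[keep_wanted]
REGEX = Timestamp=17-May-12
DEST_KEY = queue
FORMAT = indexQueue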

a212830
Champion

Makes sense. Thanks. I'll try to pre-parse the data.
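
One way to pre-parse it, as mentioned above: a small script that reads the rolling export and writes only the rows whose Timestamp falls on yesterday's date to a file that a Splunk monitor input watches. A rough sketch in Python; the file paths are placeholders:

#!/usr/bin/env python
# keep_yesterday.py - filter a rolling export down to yesterday's rows.
# SRC and DST are placeholder paths; point DST at a file that a Splunk
# file monitor input is watching.
from datetime import date, datetime, timedelta

SRC = "/data/rolling_export.csv"
DST = "/data/splunk_inbox/yesterday.csv"

yesterday = date.today() - timedelta(days=1)

with open(SRC) as src, open(DST, "w") as dst:
    for line in src:
        # Rows look like: Timestamp=17-May-12 15:45:00,host=...,metric=...
        try:
            stamp = line.split(",", 1)[0].split("=", 1)[1]
            event_date = datetime.strptime(stamp, "%d-%b-%y %H:%M:%S").date()
        except (IndexError, ValueError):
            continue  # skip malformed rows
        if event_date == yesterday:
            dst.write(line)

Splunk then monitors the filtered file instead of the raw rolling export, so the duplicates never reach the index.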

sdaniels
Splunk Employee

I think you can just do this with time modifiers in your search:

... earliest=-1d@d latest=-0d@d | ...

http://docs.splunk.com/Documentation/Splunk/latest/User/ChangeTheTimeRangeOfYourSearch
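
If duplicates do make it into the index, a search-time workaround along these lines would report on just yesterday's unique events (the index and sourcetype names here are assumptions, and dedup drops the repeated rows):

index=main sourcetype=rolling_csv earliest=-1d@d latest=@d
| dedup Timestamp host metric value
| table _time host metric value DURATION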

a212830
Champion

It's not a search problem; I want to stop the duplicate data from getting into the system in the first place.
