Getting Data In

filtering on yesterday's date?

a212830
Champion

Hi,

I have a CSV file in a nice format (see below). The data is for rolling 7/10/21-day reports that customers control, and we want to import it into Splunk. Since the data is rolling, a large chunk of it duplicates previous days. All I want to process is "yesterday's data". Is there a way to look at the data and filter out what we don't need?

Sample data:

Timestamp=17-May-12 15:45:00,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=66
Timestamp=17-May-12 15:46:06,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=3
Timestamp=17-May-12 15:46:09,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=32
Timestamp=17-May-12 15:46:41,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=30
Timestamp=17-May-12 15:47:11,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=27
Timestamp=17-May-12 15:47:38,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=65
Timestamp=17-May-12 15:48:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=31
Timestamp=17-May-12 15:49:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=29
Timestamp=17-May-12 15:49:43,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=31
Timestamp=17-May-12 15:50:14,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=32.00000000,DURATION=30
Timestamp=17-May-12 15:50:44,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=28.00000000,DURATION=65
Timestamp=17-May-12 15:51:49,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=27.00000000,DURATION=2
Timestamp=17-May-12 15:51:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=29.00000000,DURATION=32
Timestamp=17-May-12 15:52:23,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30
Timestamp=17-May-12 15:52:53,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=27
Timestamp=17-May-12 15:53:20,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=30.00000000,DURATION=30
Timestamp=17-May-12 15:53:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=34.00000000,DURATION=61
Timestamp=17-May-12 15:54:51,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=33.00000000,DURATION=30
Timestamp=17-May-12 15:55:21,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=29
Timestamp=17-May-12 15:55:50,host=APF-US211i-RH-Cpu-0,metric=CPU_Utilization,value=31.00000000,DURATION=31

1 Solution

Ayn
Legend

No. The filters that can drop events before they reach the index are regex-based and work event by event only, so there's no mechanism for checking the index for duplicates and filtering on that basis (which is wise; I imagine that would have a severe impact on performance...)
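
For reference, that regex-based mechanism is a TRANSFORMS rule in props.conf pointing at a transforms.conf stanza that routes matching events to the nullQueue. A minimal sketch, assuming a sourcetype of rolling_report (the stanza names are placeholders, and note the date has to be a hard-coded literal):

# props.conf
[rolling_report]
TRANSFORMS-droprepeats = drop_old_rows

# transforms.conf
[drop_old_rows]
# Discard any event that does NOT start with the literal date below.
# The date can't track "yesterday" on its own - that's exactly the limitation.
REGEX = ^(?!Timestamp=17-May-12)
DEST_KEY = queue
FORMAT = nullQueue

You'd have to rewrite that date yourself every day, which is why filtering the file before Splunk sees it is usually simpler.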

a212830
Champion

Makes sense. Thanks. I'll try to pre-parse the data.
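
If it helps, here's a minimal pre-parse sketch in Python (the file names are hypothetical, and it assumes every row begins with a Timestamp=DD-Mon-YY field with a zero-padded day, as in the sample above):

# keep_yesterday.py - keep only the rows stamped with yesterday's date,
# then point a Splunk monitor input at the output file.
from datetime import datetime, timedelta

# Build yesterday's date in the sample's format, e.g. "17-May-12".
yesterday = (datetime.now() - timedelta(days=1)).strftime("%d-%b-%y")

with open("rolling_report.csv") as src, \
     open("rolling_report.yesterday.csv", "w") as dst:
    for line in src:
        # The trailing space separates the date from the time of day.
        if line.startswith("Timestamp=" + yesterday + " "):
            dst.write(line)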

sdaniels
Splunk Employee

I think you can just do this with time modifiers in your search.

... earliest=-1d@d latest=@d | ...

http://docs.splunk.com/Documentation/Splunk/latest/User/ChangeTheTimeRangeOfYourSearch
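
For example, a complete search over just yesterday's events might look like this (the index and sourcetype names are made up; substitute your own):

index=main sourcetype=rolling_report earliest=-1d@d latest=@d
| stats avg(value) AS avg_cpu BY host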

a212830
Champion

Not a search; I want to stop duplicate data from getting into the system in the first place.
