Getting Data In

How do I prevent duplicate events in the summary using a search with tscollect?

gcusello
SplunkTrust

Hi all,

I'm using the BlueCoat App: this app uses tscollect to accelerate searches.
My problem is that I don't have a continuous stream of logs from BlueCoat; the logs are dropped into a directory every hour and then picked up by the Universal Forwarder.
So there isn't a fixed interval at which to run the tscollect search, and if I extend the time range, some data in the summary gets duplicated.
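For example, a rough check like the following (assuming the app's summary namespace is called bluecoat_summary; adjust the name to your environment) can show the duplicated events, because the same event ends up counted more than once when two collection windows overlap:

    | tstats count from bluecoat_summary groupby _time span=1s, source
    | where count > 1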

Is there a way to avoid re-acquiring data that is already in the summary?
Thank you in advance.
Bye.
Giuseppe

1 Solution

gcusello
SplunkTrust

The only way is to also collect the source field with tscollect, and then exclude the sources already summarized from the search.
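Something like this (just a sketch: the index, sourcetype, field list and namespace names are examples, adapt them to what the BlueCoat App actually uses):

    index=bluecoat sourcetype=bluecoat*
        NOT [| tstats count from bluecoat_summary groupby source | fields source]
    | table _time source c_ip cs_host sc_status
    | tscollect namespace=bluecoat_summary

The subsearch lists the sources already present in the summary, so they are excluded from the new collection run; note that this only works if source is one of the fields written into the summary by tscollect.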



gcusello
SplunkTrust

To work around the problem, I scheduled the tscollect search over a fixed time window (from -120m to -60m); this way I probably won't get duplicates.
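As a sketch, the scheduled search can be defined in savedsearches.conf roughly like this (the stanza name, index, field list and namespace are examples; the search itself should be the one the BlueCoat App uses for its tscollect):

    [BlueCoat hourly tscollect]
    enableSched = 1
    cron_schedule = 30 * * * *
    dispatch.earliest_time = -120m
    dispatch.latest_time = -60m
    search = index=bluecoat sourcetype=bluecoat* | table _time source c_ip cs_host sc_status | tscollect namespace=bluecoat_summary

With the logs arriving once an hour and a 60-minute window that lags an hour behind real time, each run covers a distinct slice, so every event should be collected exactly once.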
