Hi -
Let's say you have a scheduled query/report that runs daily (at midnight) over a time range of "Last 24 hours", and you summarize the results to index=summary_index_foo.
There was an outage of the "foo" data source for a couple of days; however, you were able to backfill the data to index=foo.
What is the best way to re-run the query without creating a lot of duplicates? I am pretty sure that using "collect" will create duplicates.
But will scheduling a one-time clone of the report over the outage days and summarizing the results create duplicates if the time range overlaps existing data (before and after the outage)?
In other words, the outage window did not align exactly to a minute, hour, or day boundary. When you re-schedule/re-summarize the query, will that create duplicates if the same data/events already exist in the summary index for that time range?
Or will Splunk drop duplicates when writing to the summary index? I am guessing duplicates will still be created, but I need a sanity check.
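To confirm: no, Splunk does not drop duplicates on write to a summary index, so a sanity check afterwards is worthwhile. A search like the following sketch can reveal whether the summary index already holds overlapping events for the outage window. The index name is from the post; the source value and time range are placeholder assumptions (summary events are written with the saved search's name as their source by default):

```
index=summary_index_foo source="foo daily report" earliest=-7d@d latest=@d
| stats count BY _time, source
| where count > 1
```

If overlapping summary events do exist, a user with the can_delete role can remove them with `| delete` appended to a search scoped to exactly that window, before re-running the summarizing report.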
Thank you
@Glasses Do you want to backfill the summary index? Here you go: https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Managesummaryindexgapsandoverlaps...
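The linked page covers the fill_summary_index.py backfill script, which addresses the duplicate concern directly: with -dedup true it skips any scheduled-run period that already has summary data. A sketch of an invocation, assuming the saved search is named "foo daily report" in the search app; the epoch times, app, and credentials are placeholders to substitute:

```
# Run from $SPLUNK_HOME/bin; -dedup true skips periods that already have summary data
./splunk cmd python fill_summary_index.py -app search -name "foo daily report" \
    -et <outage_start_epoch> -lt <outage_end_epoch> -j 8 -dedup true -auth admin:changeme
```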
TY for the reply, I will try it.