I completely refresh my outputlookup every day using the search below. I have scheduled this outputlookup search as a cron job.
index=ind sourcetype=todaydata earliest=-1d@d latest=now | my search ... | table one two | outputlookup look.csv
But if the data happens to be empty for the last day for some reason, the search currently overwrites the lookup with zero results.
I do not want to append the data. I want to refresh it every day (deleting the previous contents and inserting the new ones), but I want a condition on the insert: if today's result count is zero, do NOT overwrite the already existing table.
UPDATE: Splunk 7.1 added a new outputlookup parameter, "override_if_empty". Set it to "false" to keep the lookup file if the search results are empty. Note the default is 'true', so the behaviour is the same as before unless you tell it otherwise.
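For example, on 7.1+ the original search only needs the extra parameter on outputlookup (a sketch using the index, sourcetype and lookup name from the question):
index=ind sourcetype=todaydata earliest=-1d@d latest=now | my search ... | table one two | outputlookup override_if_empty=false look.csv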
Pre-7.1 Method:
Here's an idea to try and deal with this issue:
Get the new data and apply a simple identifying field ('new' in this example), then append the existing lookup data with a different identifying field ('old' in the example below).
Use eventstats across all of the results to count the results for each identifying field.
Finally, use a where clause to drop the old data if new data exists, or keep the old data if there are no new results.
<search for new data>...
| eval new=1
| append [|inputlookup <lookup>
| eval old=1]
| eventstats count(old) as existing_count count(new) as update_count
| where (update_count>0 AND new=1) OR ((isnull(update_count) OR update_count=0) AND old=1)
| fields - update_count existing_count old new
| outputlookup <lookup>
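Applied to the search from the question, a sketch (substituting the index, sourcetype and look.csv from the original post, and only counting the new rows) would look something like this:
index=ind sourcetype=todaydata earliest=-1d@d latest=now | my search ... | table one two
| eval new=1
| append [| inputlookup look.csv
| eval old=1]
| eventstats count(new) as update_count
| where (update_count>0 AND new=1) OR ((isnull(update_count) OR update_count=0) AND old=1)
| fields - update_count old new
| outputlookup look.csv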
I was searching for a similar issue, and after reading this solution I assumed the current Splunk version still worked the same way. Then I read the docs and learned about the new options available, so I thought I would update other readers who may face a similar situation.
Up to Splunk 6.5, scheduled reports did not have a "Write to a CSV lookup file" option. Ref:
https://docs.splunk.com/Documentation/Splunk/6.5.0/Report/Schedulereports
From Splunk 6.6 onwards, scheduled reports have a "Write to a CSV lookup file" option. Ref:
So, if we are using Splunk 6.6 or a newer version, we can simply use scheduled reports to append to lookup files. Thanks.
(PS - I have given around 500+ karma points so far and received a badge for it. If an answer helped you, a karma point would be nice! We should all "Learn, Give Back, Have Fun".)
Hi @ishugupta
Just wanted to follow up on this post: did @mbenwell's answer below solve your question by chance?
Thanks mbenwell. You have my vote.
Your solution works well even in 2018 (as we still use Splunk 6.x).
Makes sense, works fine.
I am assuming that if your base search (index=ind sourcetype=todaydata earliest=-1d@d latest=now) is NOT returning any data, you want your lookup to remain the same as yesterday's version, and to overwrite it if there is any data available. If so, try this:
index=ind sourcetype=todaydata earliest=-1d@d latest=now | head 1 | map search="search index=ind sourcetype=todaydata earliest=-1d@d latest=now | my search ... | table one two | outputlookup look.csv"
It will only execute the search given in the map command if the first portion of the query returns at least one row, and will skip the execution otherwise.
Thanks Somesoni, it is not working for me. The command within map search="search ..." is giving me zero results.
Whenever I table the results in the inner query, I get zero results.