Hello All,
I have a scheduled report that runs once a week. This report collects the new email addresses returned by the search, then writes them to another lookup. The issue I have is that I get duplicates because the search runs once a week. Is there a way I can avoid duplicates when using outputlookup? Dedup is not doing the trick...
| inputlookup Stored_Email_lookups.csv
| table Email, User_Id
| rename User_Id as "New User"
| dedup Email
| outputlookup append=true "New_Incoming_Emails.csv"
Hi @MeMilo09,
you have to filter the results against the lookup itself before updating it.
So, if you take the Email and User_Id fields from the events of an index, you could run something like this:
index=your_index NOT [ | inputlookup Stored_Email_lookups.csv | fields Email User_Id ]
| dedup Email User_Id
| table Email User_Id
| outputlookup append=true Stored_Email_lookups.csv
Ciao.
Giuseppe
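The same filtering pattern can be applied directly to the lookups from the original question. This is only a sketch under two assumptions: that Email alone is the key that must stay unique, and that New_Incoming_Emails.csv already exists with an Email column (on the very first run, the subsearch would fail until the file has been created once):

```
| inputlookup Stored_Email_lookups.csv
| table Email, User_Id
| rename User_Id as "New User"
| search NOT [ | inputlookup New_Incoming_Emails.csv | fields Email ]
| dedup Email
| outputlookup append=true "New_Incoming_Emails.csv"
```

The subsearch returns the Email values already stored in the destination lookup, so only genuinely new addresses survive the `search NOT [...]` filter and get appended.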
This is how I would normally do it. It avoids doing a subsearch.
index=your_index
| append [| inputlookup Stored_Email_lookups.csv]
| dedup Email User_Id
| table Email User_Id
| outputlookup Stored_Email_lookups.csv
Hi @johnhuang,
you are using a subsearch just as I did!
Anyway, it's another, similar solution: you rebuild the full lookup every time.
Ciao.
Giuseppe
Hi @MeMilo09,
good for you, see you next time!
Ciao and happy splunking.
Giuseppe
P.S.: Karma Points are appreciated 😉