You could run a search like this:
index=Serverlogs1 "error message" earliest=-7d@d latest=now | eval status=if(_time<now()-86400,"Old","New") | search status="New" | table _time host
But if you have many logs, this search could be very slow, so you could schedule the main search and save its results in a summary index, then run the comparison against the summary index instead of the Serverlogs1 index.
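For example, a minimal sketch ("summary_errors" is a hypothetical summary index name; the base search would run on a schedule, e.g. hourly):
index=Serverlogs1 "error message" earliest=-1h@h latest=@h | fields _time host | collect index=summary_errors
Then you would run the 7-day comparison against index=summary_errors instead of index=Serverlogs1, which is much faster because the summary contains only the pre-filtered error events.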
If you have many error messages to search for, you could put them in a lookup and use it in the search, as in the sketch below.
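A minimal sketch, assuming a lookup file named errors.csv with a column called error_message (both names hypothetical):
index=Serverlogs1 [ | inputlookup errors.csv | rename error_message as search | fields search ] earliest=-7d@d latest=now | table _time host
The subsearch returns each error_message value as a search term, so the outer search matches events containing any of the listed messages.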
Thanks for the reply. I tried the above query, but it is not behaving as expected: for an error such as "error1" that occurred both within the last 7 days and today, the query should not return results, because it is not a new error. However, the above query does return them.
Your requirement is still unclear.
If an error happens twice "today", is that a new error?
By "today", do you mean with the same date as now or the last 24 hours or since say 6pm yesterday?
If an error first happens just before midnight and you next run your query just after midnight, are you expecting to pick this up even though it happened yesterday?
Thanks for the reply!
By "today" I mean the last 24 hours. An error is considered a "new" error if it was not present in the previous 7 days of logs. If an error happens twice "today", the alert should be triggered for the first occurrence of the error.
Assuming you have your errors extracted in an error field and you execute your query over the previous 7 days, then:
... base search | stats last(_time) as _time by error | where _time > now()-(60*60*24)
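Note that last(_time) gives the oldest occurrence here because events are returned newest-first, so the where clause keeps only errors whose first occurrence is within the last 24 hours. If you prefer a version that does not depend on event order, min(_time) does the same job. A sketch, assuming the same extracted error field:
... base search earliest=-7d@d latest=now | stats min(_time) as first_seen by error | where first_seen > now()-(60*60*24) | table error first_seen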
The main problem is to identify errors: e.g. if you're monitoring an Oracle database, errors have the format ORA-XXXX, so it's easy to identify them.
So you should know how to identify errors, by format (as with Oracle errors) or by position (e.g. after a given word or in the third position), as in the sketch below.
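For example, a minimal sketch for the Oracle case (the rex pattern is an assumption about how the codes appear in _raw):
index=Serverlogs1 earliest=-7d@d latest=now | rex field=_raw "(?<error>ORA-\d+)" | search error=* | stats last(_time) as _time by error | where _time > now()-(60*60*24)
This extracts the error field that my search above relies on, then applies the same new-error check.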
When it's clear how to identify error messages, you can use my search to determine whether an error is new or not. But as I said, the problem isn't the Splunk search; the main problem is knowledge of the technology you're monitoring.
Usually 70% of the job is knowing what to search for and 30% is searching in Splunk!