I am not sure if this will work, but can you try something like this:
index=ver_logs [search index=ver_logs | head 1 | return source]
Let me know if this gives you results from only the latest source.
@nmohammed have you tried the following:
index="ver_logs" "ERORR detected" | rex field=source "VerLogs\\\(?<cid>[^\_]+)\_" | dedup cid host | table cid host _time _raw
I added the rex to extract the cid.
It did not help. What I am trying to do is search for any errors in the latest log file produced by each "cid". Some "cid"s don't have errors and some do. When we run a regular search, it returns only those cids which have errors:
"index="ver_logs" "ERORR detected" | rex field=source "VerLogs\(?.*?)_" | stats count by cid, host"
We then look at the list of all "cid"s obtained from the above search and fix them by running an upgrade, which produces another log file for that "cid" that does not have the error. So, in order to validate, we should only run searches against the newest log file produced by each cid for any error occurrence.
Thanks for your input.
@nmohammed did you try it with your rex for finding the cid as well?
The dedup command retains only the latest row matching the dedup criteria, because the latest event is displayed first in Splunk (reverse chronological order of time). In this case I assumed cid + host would give a unique record and that you would be interested only in the latest one.
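For example, assuming the same rex as above (the index name and the backslash escaping may need adjusting for your environment), this keeps only the most recent event per cid/host pair:

index="ver_logs" "ERROR detected" | rex field=source "VerLogs\\\\(?<cid>[^_]+)_" | dedup cid host | table cid host _time source

Since Splunk returns events newest-first, dedup keeps the latest matching event for each cid/host combination.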
When you say
it did not help, what is the output of the query vs. what is the expected output?
Also, can you add sample data for the scenario across multiple hosts where a cid has an error first and is then fixed?
Are you trying something like this:
index="ver_logs" "ERROR detected" | rex field=source "VerLogs\(?.*?)_" | stats latest(cid) AS cid by host | stats count(cid) AS count values(cid) AS cid by host
Let me know if this helps!!
The data is indexed properly without any issues.
Example data -
[3/10/2018 11:32:34 PM] ERROR detected SQL migration failed, deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I will try to explain my use case:
1. We have multiple servers, and on each server there are logs produced by our app during upgrades.
2. Each log is identified by a unique cid_date_time.log.
Sometimes during our app upgrades we run into errors, so a redeployment is needed, usually very shortly after the first failure. After redoing the upgrade, we again want to validate the newest log file (source) for each cid. The "cid" is unique and I am extracting it at search time using the "rex" command.
It was easy to run a query and look for errors, but that scans all the logs available. Unfortunately, we only want to validate whether there are any errors in the newest log file produced by each cid.
In the above 4 log files, we saw an error in the first log (D:\Logs\VerLogs\fe1234_3122018_191020.log); then, after re-deployment, there were no errors in the second log file (D:\Logs\VerLogs\fe1234_3122018_231020.log). These times may change, and for some "cid"s there may not be any errors at all, as the first upgrade attempt can be successful.
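In other words, something like the following sketch is what I am after (assuming the rex correctly extracts cid; the backslash escaping may need adjusting): first find the newest source per cid across all events, keep only the events from that source, and only then look for errors:

index="ver_logs" | rex field=source "VerLogs\\\\(?<cid>[^_]+)_" | eventstats latest(source) AS newest_source by cid | where source=newest_source | search "ERROR detected" | stats count by cid, host

That way a cid whose newest log file is clean drops out of the results entirely.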
By the latest log file, do you mean the most recent log file?
Splunk indexes data as a time series. This means that as long as your data is indexed with the proper event timestamps, search results are displayed from latest to oldest (in your results view).
Drilling down a bit more without understanding your use case fully, you can also use the latest() event order function in your stats pipeline to get the latest events based on the fields of your choosing: http://docs.splunk.com/Documentation/Splunk/7.0.2/SearchReference/Eventorderfunctions
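For example, a quick sketch (assuming cid is extracted from source as in your rex) that returns the newest log file per cid, which you could then validate in a second search:

index="ver_logs" | rex field=source "VerLogs\\\\(?<cid>[^_]+)_" | stats latest(source) AS newest_source by cid

Here latest(source) picks the source value from the most recent event seen for each cid.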
Those are a few options there...
I tried using latest(), but I am not able to get the errors from the latest log file produced by each unique cid (extracted using rex at search time). I have put my use case in more detail in the comments to deepashri_123.