Hello, I need help. I am looking for an SPL query or app that visualizes log sources that have not sent logs to the SIEM within 24 hours. Can anyone assist?
Thank you in advance!
index="yourIndex" sourcetype="yourSourcetype" earliest=-30m latest=now | stats count
This search checks your index and sourcetype for the last 30 minutes up to the current time. When you create the alert, just set the trigger condition to fire when the number of results is less than 1.
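For reference, a minimal savedsearches.conf sketch of such an alert (the stanza name, schedule, and email address are placeholders). One caveat: stats count returns a single row with count=0 even when nothing matches, so a custom trigger condition on the count value is a safer variant than counting results:
[Alert - yourSourcetype went silent]
search = index="yourIndex" sourcetype="yourSourcetype" earliest=-30m latest=now | stats count
enableSched = 1
cron_schedule = */30 * * * *
counttype = custom
alert_condition = search count=0
action.email = 1
action.email.to = soc@example.com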
Dear all,
it seems I am not the only one with this issue; I am currently trying to set up an alert for all our critical hosts and sourcetypes with defined alert times.
I created a lookup table with the following information:
critical_host, sourcetype, alerttime
What I have achieved so far is comparing the lookup table against my critical hosts.
There are two problems I haven't been able to solve yet; maybe one of you has a hint or a solution:
1. I have Windows event logs where source and sourcetype are reversed, so I need something like: if source=XmlWinEventLog then swap source and sourcetype.
2. I couldn't get it working when comparing age against the alerttime from the lookup (so currently it is just hardcoded to 6 in the search). A sketch for both points appears after my search below.
Here is my search:
| tstats count as countAtToday latest(_time) as lastTime where index!="_*" by sourcetype source host
| lookup critical_hosts.csv host OUTPUTNEW host AS critical_host
| where isnotnull(critical_host)
| eval age=now()-lastTime
| sort age d
| fieldformat lastTime=strftime(lastTime,"%Y/%m/%d %H:%M:%S")
| eval age=round((age/60/60),1)
| search age>=6
| eval age=age."hour"
| dedup host
| fields critical_host, lastTime, sourcetype, source
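A hedged sketch for both points (untested; it assumes the misplaced value appears in the source field, that the lookup is keyed on host as in the search above, and that alerttime holds a threshold in hours). The swap goes right after the tstats line, and the where belongs after the round() eval, replacing the hardcoded search age>=6:
| eval tmp=source
| eval source=if(tmp=="XmlWinEventLog", sourcetype, source)
| eval sourcetype=if(tmp=="XmlWinEventLog", tmp, sourcetype)
| fields - tmp
| lookup critical_hosts.csv host OUTPUTNEW alerttime
| where age >= alerttime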
Best Regards,
Hammy
Hi @hammy,
it isn't a good idea to add a new question to an existing, already solved one, even if it's about the same issue, because you'll probably never get an answer; so my hint is to create a new question with your requirement.
Anyway, your questions aren't clear to me:
- what's the problem with using source or sourcetype? You can easily rename one of them using AS, e.g. the one in the lookup.
- what do you mean by alerttime: the timestamp of the event, or something else?
Ciao.
Giuseppe
Hi all, I have an update and a solution: the Meta Woot! app. It's pretty awesome and exactly what I was looking for.
Hi @blasmoreno,
all the solutions from the other people are correct; let me give you another one:
you should create a lookup containing all the servers to monitor (you should always have one, to avoid losing control!), called e.g. perimeter.csv, containing at least one field (host) and possibly other fields for your use; in a few words, your asset inventory.
Then you have to run a simple search like this:
| metasearch index=_internal
| eval host=lower(host)
| stats count BY host
| append [ | inputlookup perimeter.csv | eval host=lower(host), count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
This search is ok to find missing servers.
If instead you also want to check other sources, you have to modify your search using the same approach.
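For example, a sketch extended to host/sourcetype pairs (assuming perimeter.csv also carries a sourcetype column for each host):
| metasearch index=_internal
| eval host=lower(host)
| stats count BY host sourcetype
| append [ | inputlookup perimeter.csv | eval host=lower(host), count=0 | fields host sourcetype count ]
| stats sum(count) AS total BY host sourcetype
| where total=0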
My hint is to schedule this search every 5 minutes so you immediately get an alert when a host goes missing, because in that case you're blind!
Ciao.
Giuseppe
Thank you kindly!
Hi @blasmoreno,
good for you, see you next time!
Please accept one answer, for the benefit of the other people in the Community.
Ciao and happy splunking,
Giuseppe
P.S.: Karma Points are appreciated by all the Contributors 😉
This is of course another approach to the "business problem" of monitoring, not the "technical problem" presented by OP.
The problem is that you have to have the list.
You could run another scheduled search that automatically adds any encountered hosts/sources to a lookup (which would effectively work like forwarder monitoring in the Monitoring Console), as sketched below.
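A possible sketch of such a self-maintaining baseline (the lookup name observed_sources.csv is hypothetical, and the file has to be created once, e.g. with an initial outputlookup, before the first scheduled run):
| tstats max(_time) AS lastSeen where index=* by host source
| inputlookup append=true observed_sources.csv
| stats max(lastSeen) AS lastSeen by host source
| outputlookup observed_sources.csv
A companion search can then flag any host/source pair whose lastSeen is older than your alerting threshold.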
index="yourIndex" sourcetype="yourSourcetype" | stats count
You could also remove the earliest and latest and instead set the time range picker to the last 24 hours.
That's the most straightforward but also one of the worst possible solutions for counting events. If you want to simply count events (and possibly filter and/or group them by an indexed field), never use search plus stats; use tstats. It's several orders of magnitude quicker.
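For example, the search above rewritten with tstats (same placeholders):
| tstats count where index="yourIndex" sourcetype="yourSourcetype" earliest=-30m latest=now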
Let me re-phrase my question. I need to know when a log source stops sending logs to the SIEM (within an hour-long time frame). I felt that if I had a dashboard/visualization I could quickly glance at it and notice if anything was awry. Alternatively, some type of email or text alert would suffice.
- Any ideas from the Splunk Community?
That can be done much more easily.
Just do a stats count (or, even better, a tstats count) by source (though are you sure you want source rather than host?), binned into hour-long periods. And you're done; just find the zeros.
Any chance you can share the SPL query so I can try it out and then build a dashboard with a visualization? I was planning to monitor the dozen most important sourcetypes.
But the question, as I said before, is whether you want it by source or by host. Grouping by source has some caveats with many file-based sources, whereas a single host can produce many different kinds of events, so both methods have their limitations.
I would answer source, as we will have numerous hosts in a group.
But remember that the source field for some types of sources (the most obvious being Exchange message tracking logs) can change for the same "logical source", so you get a new filename in source every hour or even more often.
Anyway, you can tweak it later - aggregate some entries or whatever.
The idea is relatively simple. You do
| tstats count where index=* AND (your additional conditions for sourcetypes) by source _time span=1h
Now you need a neat trick to find missing entries (hours for which there were no results).
Convert it to a table
| xyseries source _time count
And fill in the blanks
| fillnull value=0
Now all you need to do is "unpack" the table back into single time-source pairs with count
| untable source _time count
And now you can easily get only the rows...
| where count=0
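Put together, the whole pipeline might look like this (the sourcetype filter is a placeholder to adapt):
| tstats count where index=* AND sourcetype IN ("your_sourcetype_1", "your_sourcetype_2") by source _time span=1h
| xyseries source _time count
| fillnull value=0
| untable source _time count
| where count=0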
One limitation of this method is that you will not get results for a source which didn't produce any results at all throughout the whole day.
Splunk cannot find something that isn't there. You'd have to have a list of sources to compare with.
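For reference, a hedged sketch of that comparison, using the same append trick shown earlier in the thread (expected_sources.csv is a hypothetical lookup with a source column):
| tstats count where index=* by source
| append [ | inputlookup expected_sources.csv | eval count=0 | fields source count ]
| stats sum(count) AS total by source
| where total=0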
