Hello,
I have a lookup table full of syslog hosts that send data to Splunk. My goal is to identify which syslog hosts have stopped sending data so I can alert the appropriate folks. My lookup table has the fields "host", "hostname", and "src"; my syslog hosts report host as an IP address. My search so far (broken):
| inputlookup sysloglist
| appendcols [ search index=myindex sourcetype=syslog | stats count by host ]
| fields host hostname count
| fillnull value=0 count
| sort + count
When this search runs, the intent is to load the lookup table, append the event counts from the subsearch, arrange the fields, fill nulls where there is no count, then sort the 0s to the top (plus some fancy color formatting). The problem I've encountered is that the event counts are wrong. For example, I have a syslog host called "print01" which, according to the search above, has 0 events in the last 60 minutes. But when I run this search:
index=myindex sourcetype=syslog host=print01 | stats count
I get over 100 events, which means the logic in my first search is incorrect. Yes, I have verified that the time ranges are the same. I have also tried running the search in reverse (running the count first, then appendcols the lookup table), but that doesn't work either. I'm not sure if it's a sort-order issue in my CSV file or if I'm just missing something ridiculously obvious.
Any help is appreciated, thanks!
Try something like this. Since you're only checking the event count, you can use the tstats command, which is much faster than a regular search:
| tstats count WHERE index=myindex sourcetype=syslog by host
| append [ | inputlookup sysloglist | table host | eval count=0]
| stats max(count) as count by host
| sort count
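For the record, the reason your original search mis-counts is that appendcols pairs rows by position, not by value: row 1 of the lookup gets row 1 of the subsearch's counts, whichever host that happens to be, so the counts land on the wrong hosts whenever the two result sets are sorted differently. If you wanted to keep your original shape, a value-based left join should also work (a sketch, assuming the lookup's host values exactly match the indexed host field):

| inputlookup sysloglist
| join type=left host [ search index=myindex sourcetype=syslog | stats count by host ]
| fillnull value=0 count
| sort + count

Either way, for the alerting use case you can tack on | where count=0 at the end so the search returns only the silent hosts.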
That makes it a lot faster than before. However, I forgot to mention that I also need the hostname field displayed. As soon as I add that field anywhere in the search, it breaks in the same way my original search did.
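One way to bring hostname back without breaking the counts is to keep it out of the stats entirely and add it afterwards with the lookup command (a sketch, assuming sysloglist is set up as a lookup definition, not just a CSV file):

| tstats count WHERE index=myindex sourcetype=syslog by host
| append [ | inputlookup sysloglist | table host | eval count=0 ]
| stats max(count) as count by host
| lookup sysloglist host OUTPUT hostname
| sort count

The likely reason adding hostname broke things: if hostname goes into the by clause (stats ... by host hostname), the tstats rows (which have no hostname field) and the appended lookup rows (count=0) end up in separate groups and never merge, so every host shows 0 again. Adding hostname after the stats avoids splitting the groups.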