Splunk Search

How to do a host list lookup in a Splunk dashboard, including hosts with no hits?

hettervik
Builder

Hi,

I've been asked to make a dashboard where one can search for a list of hosts and get an output with all the hosts in the input list and when they were last seen in the logs. First of all, is there a good way to take a list of hosts as an input to a dashboard? One (not pretty) way to make it work is to have the users input the list in the form of "host01|host02|host03", in which case I can use a search like the one below. Any better ideas?

index=windows sourcetype=dhcp_win
| where match(dest_dns, "host01|host02|host03")
| stats latest(_time) as latest by dest_dns
| sort - latest
| eval latest=strftime(latest, "%d/%m/%y %T")
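For the input itself, here's a minimal sketch of a Simple XML text input that could feed a search like the one above (the token name host_list and the label are my assumptions, not from an actual dashboard):

<form>
  <fieldset submitButton="true">
    <!-- free-text input; the user types e.g. host01|host02|host03 -->
    <input type="text" token="host_list">
      <label>Hosts (pipe-separated, e.g. host01|host02|host03)</label>
    </input>
  </fieldset>
  <!-- a panel's search can then reference the token, e.g.: -->
  <!-- | where match(dest_dns, "$host_list$") -->
</form>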

Also, in the search above, if all hosts are found in the DHCP logs, I'll get a table with three rows, one for each host. I want to make it so that if one or more of the hosts are not found, they'll still show in the table, but with a "latest" field that says "not found", or something like that. The point being that I'll be able to see the hosts that were not found without having to manually cross-reference my output with my input. Any ideas how to do this as well?

Any tips are greatly appreciated, thanks!

1 Solution

hettervik
Builder

I found something that worked.

index=windows sourcetype=dhcp_win
[| makeresults | eval assets="$assets_token$" | eval assets="dest_dns=" + replace(assets, ",\s?", "* OR dest_dns=") + "*" | return $assets]
| append [| makeresults | eval dest_dns=split(replace("$assets_token$", "\s", ""), ",") | eval _time=0]
| stats latest(_time) as latestepoch by dest_dns
| sort - latestepoch
| eval latest=strftime(latestepoch, "%d/%m/%y %T")
| eval latest=if(latestepoch=0, "not found", latest)
| table dest_dns, latest
| rename dest_dns as DNS, latest as Latest

It allows the user to enter a comma-separated list of hosts as an input. The search changes the commas to logical ORs and, in addition, appends one dummy event with a multivalue host field containing one value for each host. This dummy event has epoch time 0. If a host has no events with an epoch time greater than 0, only the dummy event matched, and I can mark it as "not found". (Note the whitespace is stripped before the split, so the dummy values match the real dest_dns values even if the user types spaces after the commas.)
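For concreteness, here is how the search expands if a user enters "host01, host02, host03" into $assets_token$ (example values of mine): the first subsearch builds the filter string and returns it, so the base search effectively becomes

index=windows sourcetype=dhcp_win dest_dns=host01* OR dest_dns=host02* OR dest_dns=host03*

while the append adds a single dummy result at _time=0 whose multivalue dest_dns field holds host01, host02 and host03, so every requested host is guaranteed a row out of the stats.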


gcusello
SplunkTrust

Hi hettervik,
you could create a lookup with all the hosts in your perimeter.
You can maintain this list manually or with a scheduled search (e.g. once a day or every hour, but not too frequently, because it's a heavy search); you can see an example of this in the DMC App Forwarder Management.
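As a hedged sketch of such a scheduled search (assuming, as in the checks below, that your hosts appear in index=_internal, and that perimeter.csv is the lookup file name you want), something like this could rebuild the list once a day:

| metasearch index=_internal sourcetype=splunkd earliest=-24h latest=now
| eval host=upper(host)
| stats count by host
| fields host
| outputlookup perimeter.csv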
Using this list, you can check frequently (e.g. every five minutes) whether all the perimeter hosts are connected and sending logs. Try something like this:

| metasearch index=_internal sourcetype=splunkd earliest=-300s latest=now
| eval host=upper(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=upper(host), count=0 | fields  host count ]
| stats sum(count) AS Total by host

Hosts where Total>0 are sending logs;
hosts with Total=0 are missing in the last five minutes.
By adding | where Total=0 at the end of the search, you can also create an alert that runs every five minutes.
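Spelled out, that alert search is just the check above with the filter appended:

| metasearch index=_internal sourcetype=splunkd earliest=-300s latest=now
| eval host=upper(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=upper(host), count=0 | fields host count ]
| stats sum(count) AS Total by host
| where Total=0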

The problem is getting the latest connection: you must define a time period for your search, and a host could have gone missing before that period, but if you use a larger time period your search could be too slow.

I suggest using two panels: one to find the missing hosts and another one to show the last connection.
To do this, you could create a drilldown into another panel of the same dashboard, or into another dashboard.
To learn how drilldowns work, see the Splunk 6.0 Dashboard Examples App.
In the second panel, you could try something like this to get the latest connection:

| metasearch index=_internal sourcetype=splunkd earliest=-mon latest=now host=$host$
| stats latest(_time) AS latest by host
| eval latest=if(isnull(latest),"no logs in last month",strftime(latest,"%d/%m/%Y %H.%M.%S"))
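As a hedged sketch of the drilldown wiring (Simple XML; the token name host matches the $host$ placeholder in the search above), the first panel could set the token from the clicked row:

<drilldown>
  <!-- pass the clicked host value into the second panel's search -->
  <set token="host">$click.value$</set>
</drilldown>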

You could also collapse the two searches into one without a drilldown, but the final search will be very slow.

| metasearch index=_internal sourcetype=splunkd earliest=-mon latest=now 
[ | metasearch index=_internal sourcetype=splunkd earliest=-300s latest=now
  | eval host=upper(host)
  | stats count by host
  | append [ | inputlookup perimeter.csv | eval host=upper(host), count=0 | fields  host count ]
  | stats sum(count) AS Total by host
  | where Total=0
  | fields host
  ]
| stats latest(_time) AS latest by host
| eval latest=if(isnull(latest),"no logs in last month",strftime(latest,"%d/%m/%Y %H.%M.%S"))

Bye.
Giuseppe


hettervik
Builder

Hi Giuseppe,

Thank you very much for your answer. Unfortunately, the host I'm looking for does not exist as an indexed field, only as a search-time field extraction, so I can't use metadata searches, though that would have been very nice.

Also, the thought of using a lookup had occurred to me, but I was hoping to make the dashboard more dynamic. That is, to let users input whatever list of hosts they want, in one way or another.


gcusello
SplunkTrust

Hi hettervik,
why do you say that "the host I'm looking for does not exist as an indexed field, but only as a search-time field extraction"?
host is a default field present in every event; if it's not meaningful, you should re-design the log ingestion and the assignment of the host field (see https://docs.splunk.com/Documentation/Splunk/latest/Data/Overridedefaulthostassignments ).
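For reference, a minimal sketch of such an index-time host override (the stanza names, sourcetype, and regex here are assumptions for illustration; the real pattern depends on your events):

props.conf (on the indexers or heavy forwarders):

[dhcp_win]
TRANSFORMS-set_host = override_host_from_event

transforms.conf:

[override_host_from_event]
# capture the real client hostname from the raw event (example regex)
REGEX = Client Hostname:\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host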

The problem with making the "dashboard work more dynamic" is that you will surely have many events, so the searches will be slow.
You could accelerate your searches, but you pay for this acceleration with less-than-real-time answers; in other words, you could schedule the last search to run every hour (if an hour is sufficient to complete the job) and see results that are at most one hour old.
To do this:

  • verify how much time is needed to complete the job,
  • use the last search to create a report,
  • schedule it based on the maximum execution time you found,
  • save the report in a dashboard as a report (not as a search).

Bye.
Giuseppe


hettervik
Builder

You are right, the logs should be re-designed with regard to the host field. As of now, depending on the logs, the host could be e.g. a proxy, while I'm looking for the machine that generated the traffic that created the event in the proxy. That being said, I'm not in charge of the infrastructure at this site, only the frontend, so getting changes like this done could take time.


gcusello
SplunkTrust

It's easier to do than you might think: you only have to intervene on the indexers.
Bye.
Giuseppe
