Splunk Search

How do you write a query to find forwarders that have stopped sending logs?

wvalente
Explorer

Guys,

I need to see which forwarders have not sent events in the last 3 hours.

For example: if a forwarder has not sent logs, or has not connected to an indexer, in the last 3 hours, I need to fire an alert.

I'm using the following searches, but I can't work out how to do the time comparison.

| metadata type=sourcetypes index=XXX | rename totalCount as Count firstTime as "First Event" lastTime as "Last Event" recentTime as "Last Update" | fieldformat Count=tostring(Count, "commas") | fieldformat "First Event"=strftime('First Event', "%c") | fieldformat "Last Event"=strftime('Last Event', "%c") | fieldformat "Last Update"=strftime('Last Update', "%c")
| where 'Last Event' > ???

| metadata type=hosts index=XXX | eval diff=now()-lastTime | where diff > 3600*24 | convert ctime(lastTime) as last_connected | eval not_reported_since=strftime(diff,"%T") | table host last_connected not_reported_since
| where last_connected > ????

Can you help me?

1 Solution

lakshman239
SplunkTrust

As @pkeenan87 says, if you can use the DMC, that would be ideal. However, if you need a search-based approach:

| metadata type=hosts index=* | eval max_allowed_delay_inhrs = 24 | eval age = round((now()-recentTime)/3600,2) | eval LastEvent=strftime(recentTime,"%+") | where age > max_allowed_delay_inhrs

You can change the 24 hrs to whatever threshold you want; any host that has not sent data for more than 24 hrs will be alerted on.
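For the 3-hour window in the original question, the same search just needs the threshold changed. A sketch, with index=XXX standing in for the asker's index:

```
| metadata type=hosts index=XXX
| eval max_allowed_delay_inhrs = 3
| eval age = round((now()-recentTime)/3600,2)
| eval LastEvent = strftime(recentTime,"%+")
| where age > max_allowed_delay_inhrs
| table host LastEvent age
```

Saved as a scheduled alert (e.g. every 15 minutes) that triggers when the result count is greater than 0, this covers the 3-hour requirement.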


pkeenan87
Communicator

Splunk provides a way to do this with the Distributed Monitoring Console (DMC):

  1. You will need to build the forwarders asset table: https://docs.splunk.com/Documentation/Splunk/7.2.4/DMC/Configureforwardermonitoring
  2. Enable the alert for "Missing Forwarders": https://docs.splunk.com/Documentation/Splunk/7.2.4/DMC/Platformalerts
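If the DMC is not an option, a rough hand-rolled equivalent is to watch forwarder connections recorded in the _internal index. This is only a sketch: the group and field names (tcpin_connections, hostname) are as logged by splunkd's metrics.log on the indexer, so verify them in your environment, and run it over a time range long enough (e.g. last 24 hours) for previously connected forwarders to appear:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_connected by hostname
| eval hours_quiet = round((now() - last_connected)/3600, 2)
| where hours_quiet > 3
| table hostname last_connected hours_quiet
```

Unlike the metadata-based searches, this catches forwarders that stopped connecting entirely, even if their index never received data in the search window.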

JLederer
New Member

Hi wvalente,

if you want to know which hosts didn't send data to an index for 3 hours or more, you can get a list using this query:

| metadata index="XXX" type="hosts"
| eval compareTime=relative_time(now(), "-3h")
| where lastTime <= compareTime
| convert ctime(lastTime), ctime(compareTime), ctime(recentTime), ctime(firstTime)

Explanation
1. Get host metadata from index 'XXX'
2. Calculate epoch timestamp for '3 hours ago'
3. Filter for results where the last event in index XXX is older than 3 hours
4. Make epoch timestamps human readable (optional)

But this will not help you completely, because it will also list hosts that have been offline for months. My suggestion is to restrict the filter to a time window; I chose 5 minutes in my example.

This filters for hosts whose last event arrived between 3 hours and 3 hours 5 minutes ago, i.e. hosts that stopped sending roughly 3 hours ago, while ignoring hosts that went quiet long before that. The size of the window is up to you, based on how regularly your hosts send data and how often the alert runs.

| metadata index="os" type="hosts"
| eval compareTime=relative_time(now(), "-3h"), filterOldHostsTime=relative_time(compareTime,"-5m")
| where lastTime <= compareTime AND lastTime >= filterOldHostsTime
| convert ctime(lastTime), ctime(compareTime), ctime(recentTime), ctime(firstTime)
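If this search is saved as an alert scheduled every 5 minutes (matching the window size), each host that goes quiet should be reported about once, roughly 3 hours after its last event. A sketch with the index name as a placeholder and a readable output table:

```
| metadata index="XXX" type="hosts"
| eval compareTime=relative_time(now(), "-3h"), filterOldHostsTime=relative_time(compareTime, "-5m")
| where lastTime <= compareTime AND lastTime >= filterOldHostsTime
| convert ctime(lastTime) AS last_event
| table host last_event
```

If the schedule interval and window size drift apart, hosts can be reported twice or missed, so keep the two in sync.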

I hope this helps you.


wvalente
Explorer

Yeah, the problem with this search is that it returns hosts that have been offline for months. Strange.
