Splunk Search

How to find computers which stopped sending logs

Nawab
Communicator

I have a deployment where multiple computers send logs to a WEF server using WEF (Windows Event Forwarding). I tried to map the ComputerName field to the host field but failed to do so.

Now I want to create an alert if any of the computers stops sending logs to Splunk. How can I do this?

 

The method documented by Splunk is based on the index, host, and sourcetype fields, which remain the same for all computers in our case.

0 Karma
1 Solution

gcusello
SplunkTrust
SplunkTrust

Hi @Nawab ,

to use ComputerName instead of host you cannot use tstats, so the search is slower; try this:

with the perimeter.csv lookup:

index=* 
| stats count BY sourcetype ComputerName
| append [ 
     | inputlookup perimeter.csv 
     | eval count=0 
     | fields ComputerName sourcetype count ]
| stats sum(count) AS total BY sourcetype ComputerName
| where total=0
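
As a side note (this sketch is not part of the answer above): the perimeter.csv lookup can be seeded from data already indexed, assuming a 30-day baseline is long enough to see every expected ComputerName (the time range is an assumption, adjust it to your environment):

index=* earliest=-30d
| stats count BY sourcetype ComputerName
| fields sourcetype ComputerName
| outputlookup perimeter.csv

Run it once and re-run it whenever the set of monitored machines changes.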

without lookup:

index=* 
| stats count latest(_time) AS _time BY sourcetype ComputerName
| eval period=if(_time<now()-3600,"previous","latest")
| stats 
     dc(period) AS period_count
     values(period) AS period
     BY sourcetype ComputerName
| where period_count=1 AND period="previous"
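
Note that the first stats keeps only the latest event per sourcetype/ComputerName pair, the eval labels a pair "previous" when that latest event is older than one hour, and the final where keeps only those silent pairs, so the search must cover a window wider than one hour (e.g. the last 24 hours or the last week). A minimal sketch of how this could be condensed into an hourly alert (the 24-hour window is only an example, adjust it to how often your machines normally log; trigger the alert when the number of results is greater than zero):

index=* earliest=-24h
| stats latest(_time) AS last_seen BY sourcetype ComputerName
| where last_seen < now()-3600
| convert ctime(last_seen)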

Ciao.

Giuseppe


0 Karma

Nawab
Communicator

The issue in my case is that the field I am looking at is ComputerName instead of host.

Below is the deployment:

 

All Windows servers ----> forwarder server ----> Splunk

In Splunk, the host field will be the forwarder server (i.e. a single value) instead of the backend servers that actually send the data.

These queries work on the host, source, sourcetype, and index fields.

0 Karma


gcusello
SplunkTrust
SplunkTrust

Hi @Nawab ,

Good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors 😉

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi @Nawab ,

If you have a list of hosts to monitor, you could put it in a lookup (called e.g. perimeter.csv, containing at least two columns: sourcetype and host) and run a search like the following:

| tstats 
     count 
     WHERE index=* 
     BY sourcetype host
| append [ 
     | inputlookup perimeter.csv 
     | eval count=0 
     | fields host sourcetype count ]
| stats sum(count) AS total BY sourcetype host
| where total=0
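
For illustration only, a hypothetical perimeter.csv could look like this (the sourcetype and host values below are made up):

sourcetype,host
WinEventLog:Security,server01
WinEventLog:Security,server02
WinEventLog:System,server03

Any sourcetype/host pair listed in the lookup but absent from the index over the search period ends up with total=0 and is returned by the search.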

If you don't have this list and you want to check for hosts that sent logs in the last week but not in the last hour, you could run:

| tstats 
     count 
     latest(_time) AS _time
     WHERE index=* 
     BY sourcetype host
| eval period=if(_time<now()-3600,"previous","latest")
| stats 
     dc(period) AS period_count
     values(period) AS period
     BY sourcetype host
| where period_count=1 AND period="previous"

The first solution gives you more control but requires you to maintain the perimeter lookup.

Ciao.

Giuseppe

0 Karma
