
How to detect missing logs from any domain-joined host?

corti77
Communicator

Hi,

By chance, I discovered that a power user with admin rights had disabled the Sysmon agent and the Splunk forwarder on his computer to gain some extra CPU for his daily tasks. To react more quickly to this type of "issue" in the future, I would like to have alerts in Splunk informing me whenever any host (initially domain-joined workstations and servers) stops reporting events.

Has anyone already implemented something like this? Any good article to follow?

I plan to create a lookup table using ldapsearch and then an alert that detects which hosts from that table are not present in a basic host-listing search on the Sysmon index, for example.

Is there a better or simpler approach?

 

Many thanks

1 Solution

gcusello
SplunkTrust

Hi @corti77,

If you have a list of monitored hosts, you could run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you don't have this list, you can identify the hosts that sent logs in the last 30 days but not in the last hour:

| tstats count latest(_time) AS _time WHERE index=* BY host
| eval period=if(_time>now()-3600,"last_Hour","Previous")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"
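
A variant of the same idea, with the 30-day window written directly into the search instead of relying on the time range picker, could look like this (the field name last_seen is only illustrative):

| tstats latest(_time) AS last_seen WHERE index=* earliest=-30d@d latest=now BY host
| where last_seen < relative_time(now(), "-1h")

Any host whose most recent event in the last 30 days is older than one hour is returned.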

Ciao.

Giuseppe


corti77
Communicator

Thanks a lot @gcusello!

I just created a search to build the CSV used in your query:

| ldapsearch domain=default search="(objectClass=computer)" 
| table name 
| rename name as host 
| outputlookup append=false monitored_hosts.csv

and then ran your query using monitored_hosts.csv.
It works flawlessly! Thanks once again.
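
For anyone turning this into a scheduled alert, a minimal savedsearches.conf sketch could look like the following; the stanza name, cron schedule, and email recipient are placeholders rather than anything from the original setup:

# Sketch: alert on monitored hosts that sent no events in the search window
[Missing host logs]
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
search = | tstats count WHERE index=* BY host \
  | append [ | inputlookup monitored_hosts.csv | eval count=0 | fields host count ] \
  | stats sum(count) AS total BY host \
  | where total=0
# Trigger whenever at least one host comes back from the search
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = soc@example.com

The ldapsearch/outputlookup search above can be saved as a second scheduled report so that monitored_hosts.csv is refreshed on its own cadence.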

 


