Splunk Search

How to detect a new host used by a user?

desmondpigott
Explorer

Summary: We want to trigger an alert/email when a user logs on to a new system for the first time.

Event ID 4624 is collected from domain controllers. This event contains essentially what we need (host, username, logon type). In our case we only care about logontype="7" (workstation unlock) for now.

As new events arrive, how can I compare the username + hostname + logontype, to previously indexed events, in order to detect new username + hostname combinations?

Something like: every 15 minutes, fetch all the 4624 events that arrived, extract the hostname and username, then search all previous results for $username$ $hostname$ logontype="7". If there are no results, it must be a new system, so trigger an alert to $username$@domain.com.
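In SPL terms I picture something roughly like this (just a sketch; the index, sourcetype, field names, and the 90-day lookback are placeholders, not something we actually have yet):

index=foo sourcetype=bar EventID="4624" logontype="7" earliest=-90d@d latest=now | stats earliest(_time) AS first_seen BY username hostname | where first_seen >= relative_time(now(), "-15m@m")

The catch is that every run has to scan the whole lookback window, which I suspect gets expensive, hence the question about lookups below.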

Another possible solution: as matching events arrive (EventID="4624" logontype="7"), extract the username and hostname, then query all previous data for a match on all four attributes; if nothing comes back, it must be a new system, so trigger an alert.

According to bullet #4 in this doc: http://docs.splunk.com/Documentation/ES/4.2.0/User/Applicationprotocolsblacklist#Search-driven_looku... it may be possible to detect new hosts. But that seems like overkill and expensive; there must be a native way to do this in Splunk core. Are search-driven lookups available without Enterprise Security?

Open to other ideas about how to achieve this.

1 Solution

somesoni2
SplunkTrust

The typical way of achieving this is to maintain a lookup table with a list of all users and the hostnames they have used in the past. Keep this lookup updated by a scheduled saved search that looks for new logons and appends them to the lookup. Once that is in place, you can set up your alert to find user logons and compare them against the lookup data, and alert when no match is found.

Example saved search for lookup - user_host_lookup.csv

index=foo sourcetype=bar EventID="4624" logontype="7" | stats count by username, hostname | table username, hostname | inputlookup append=t user_host_lookup.csv | stats count by username, hostname | table username, hostname | eval flag="y" | outputlookup user_host_lookup.csv

Use an appropriate time range and cron schedule for this search so runs don't overlap (e.g. earliest=-20m@m, latest=-5m@m, cron: 5-59/15 * * * *).
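For reference, wired up as a scheduled saved search this might look roughly like the following in savedsearches.conf (a sketch only; the stanza name is a placeholder, and the index, sourcetype, and field names are the same assumptions as above):

# savedsearches.conf - sketch of the lookup-maintenance search (all names are placeholders)
[Update user_host_lookup]
search = index=foo sourcetype=bar EventID="4624" logontype="7" | stats count by username, hostname | table username, hostname | inputlookup append=t user_host_lookup.csv | stats count by username, hostname | table username, hostname | eval flag="y" | outputlookup user_host_lookup.csv
enableSched = 1
# minutes 5, 20, 35, 50 - every 15 minutes, offset by 5
cron_schedule = 5-59/15 * * * *
dispatch.earliest_time = -20m@m
dispatch.latest_time = -5m@m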

You'd need to backfill this search for historical data. You can use the summary index backfill method for this.
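If you go that route, the invocation is roughly along these lines (flags as I recall them from the fill_summary_index.py documentation; the app, saved-search name, time range, and credentials are all placeholders):

# run from $SPLUNK_HOME/bin on the search head - sketch only, every value here is a placeholder
splunk cmd python fill_summary_index.py -app search -name "Update user_host_lookup" -et -90d -lt now -j 4 -dedup true -auth admin:changeme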

Your alert search can now be the following. It gives all the username-hostname combinations that are not already in the lookup (i.e. new combinations).

index=foo sourcetype=bar EventID="4624" logontype="7" | stats count by username, hostname | lookup user_host_lookup.csv username hostname OUTPUT flag | where isnull(flag)


desmondpigott
Explorer

Thank you Somesoni2. (And btw, congratulations on the Splunk 2016-17 MVP award!)

I managed to get a proof of concept working by scheduling the two searches described above, with some additional settings described below.

For the sake of other readers it was important to also do the following:

  1. The second scheduled job, using the second (shorter) query above, should be scheduled to run with Earliest: -5m@m, Latest: now, as a basic scheduled search every 5 minutes. Do not make it a real-time search; I created a flood of email for a single event by doing that. (Points 1-4 are pulled together in the savedsearches.conf sketch after this list.)

  2. The notification email uses $RESULT.userName$@yourdomain.com as the recipient of the email message. This pulls the username out of the search results and sends the email directly to each user. (Replace with your own field as appropriate.)

  3. Configure the email alert to fire for each result.

  4. The email alert should use suppression (throttling), keyed on the asterisk "*", for 15 minutes. This prevents multiple alerts from being sent if someone quickly logs on/off/on to a new computer within a span of less than 5 minutes; the user will only receive a single email for that host. The 15-minute suppression window also gives the first scheduled search a chance to update the CSV file, so a duplicate email isn't sent for the same host 5 minutes later.

  5. I did figure out how to use the backfill Python script (not that hard), but it requires a separate index to be available when you run it. The backfill was very slow but it did finish eventually, and you can disable that index once you're done; it's only needed for backfilling, not for running this solution. If you don't care about backfilling, you don't need the temporary index or the script at all: just schedule your search jobs and let the CSV build up over time. You can always disable/enable the email alerts when you're ready to notify end users.
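For other readers, points 1-4 together might look roughly like this in savedsearches.conf (a sketch only; the stanza name, field names, domain, and throttle values are placeholders to adapt):

# savedsearches.conf - sketch of the alert described in points 1-4 (all names and values are placeholders)
[Alert - new host for user]
search = index=foo sourcetype=bar EventID="4624" logontype="7" | stats count by username, hostname | lookup user_host_lookup.csv username hostname OUTPUT flag | where isnull(flag)
enableSched = 1
# basic scheduled search every 5 minutes over the last 5 minutes (point 1)
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = now
# trigger when there are any results
counttype = number of events
relation = greater than
quantity = 0
# fire the email once per result row rather than one digest email (point 3)
alert.digest_mode = 0
# throttle for 15 minutes across all fields (point 4)
alert.suppress = 1
alert.suppress.period = 15m
alert.suppress.fields = *
# send the email straight to the user from the result row (point 2)
action.email = 1
action.email.to = $result.username$@yourdomain.com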
