
Updating fields in a lookup through a scheduled search

tmarlette
Motivator

Fellow Splunkers!

I am attempting to update fields within a lookup file, and fortunately there are only 2 fields. I am trying to keep track of the last login time to a system, which equates to user and login fields in a lookup table.

Here is my search:

index=myIndex sourcetype=mySourcetype eventtype=userLogin | stats latest(_time) as "login" by user | convert ctime(login) as login | outputlookup server_logins.csv

Sometimes the user column will need to have a new user appended, but mostly it is just the login time that needs to be updated.

We are trying to keep a history of when each user logs in for a couple of years by appending to this file, and to use it to disable inactive accounts.
I've read this answer, though I can't seem to get it to work for me, as I've tried above.

I know that after I get this right, I can just schedule this search to run every day to keep the file up to date.

Any help would be greatly appreciated.


Runals
Motivator

In similar situations I do something comparable to what aljohnson posted, but I don't use the append command. Rather, I leverage that ability within the inputlookup command itself.

&lt;your search&gt; | table user login | inputlookup append=t server_logins.csv | stats max(login) as login by user | outputlookup server_logins.csv

While not needed for this use case, you can take this a step further and purge old entries from the lookup within the same search. For example, if you wanted to drop users from this list after, say, 30 days, you could do something like this (note that the comparison requires login to stay in epoch time, so skip any ctime conversion here):

&lt;your search&gt; | table user login | inputlookup append=t server_logins.csv | stats max(login) as login by user | eval current_time = now() | where login > (current_time - (86400 * 30)) | outputlookup server_logins.csv

aljohnson_splun
Splunk Employee

This is a common problem, and the solution isn't necessarily intuitive. Here is what I would do:

  1. First, load your lookup with inputlookup - this is necessary if we ever want to update our existing data.
  2. Run the search that generates the new data and append the lookup to it, or conversely, start from the lookup and append the search results; the ordering doesn't matter.
  3. Get the latest value of login, once again split by user with stats (max works here because login is in epoch time).
  4. Use outputlookup to save the results.

    | inputlookup server_logins.csv
    | append
        [ search index=myIndex sourcetype=mySourcetype eventtype=userLogin
        | stats latest(_time) as "login" by user ]
    | stats max(login) as login by user
    | outputlookup server_logins.csv
Now you can just schedule the search to run as often as you would like, adding earliest and latest to match. Note that login is kept in epoch time end to end; converting it with ctime turns it into a string, which makes the stats comparison unreliable on later runs, so convert for display only.
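
As a sketch of what the scheduled version might look like (the daily time window is an assumption; adjust earliest and latest to match your schedule):

    | inputlookup server_logins.csv
    | append
        [ search index=myIndex sourcetype=mySourcetype eventtype=userLogin earliest=-1d@d latest=@d
        | stats latest(_time) as "login" by user ]
    | stats max(login) as login by user
    | outputlookup server_logins.csv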

lguinn2
Legend

If you are willing to write some code, I think that the KV store might be an even better alternative, since it supports direct update operations.
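
Even without custom REST code, a KV-store-backed lookup supports keyed updates from SPL. A minimal sketch, assuming a KV store collection exposed through a lookup definition named server_logins_kv (the name is hypothetical): outputlookup's key_field option upserts records keyed on user, so there is no inputlookup/append round trip.

    index=myIndex sourcetype=mySourcetype eventtype=userLogin
    | stats latest(_time) as "login" by user
    | outputlookup append=true key_field=user server_logins_kv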
