Splunk Search

How do I deal with Linux auth.log "Last message repeated" lines when trying to count identical events over a time period?

Path Finder

I'm trying to read in some logs on a Solaris system to check for users failing a login N times over Y seconds. Currently I'm just looking for the log entry that tells me an account was locked out, but I'm trying to get more granular than that. This should be pretty easy, but Solaris and some Linux syslog daemons make it difficult by condensing repeated log entries. So an example log might look like this:

[time] User XXX failed login.
[time + 20] Last message repeated 1 times.
[time + 30] User ZZZ failed login.
[time + 31] User XXX failed login.

In this case user XXX failed to log in three times in quick succession, and I'd like to be able to search for that. I'm wondering if anyone can think of a way to do it. I've tried using transactions, but they only support a start and end condition, not a required middle event. My search using transactions is below.

source="/var/log/authlog" | transaction maxspan=20s maxevents=2 startswith="failed" endswith="last message repeated"

From that sample log the search would pull the following parts of the log into one event:

[time] User XXX failed login.
[time + 20] Last message repeated 1 times.

I think I need some way to grab all three lines; maybe there's another method I'm unaware of? If it's just not possible I can accept that as an answer too.

If it helps at all, Solaris only condenses logs for about 20 seconds before printing out the whole line again.

Esteemed Legend

How about like this:

source="/var/log/authlog" | rex "Last\s+message\s+repeated\s+(?<repeatsNoContext>\d+)\s+times." | fillnull value=0 repeatsNoContext | autoregress repeatsNoContext AS repeatsForMe | eval myCount= 1 + repeatsForMe

This will cause every event to have a myCount field holding the correct total number of occurrences.
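If the end goal is to alert on N failures in Y seconds per user, one possible extension of the above is the following sketch. The user-extraction regex, the 60-second window, and the threshold of 3 are assumptions based on the sample log; it also relies on Splunk's default newest-first event order so that each "repeated" line directly precedes its original event in the pipeline.

```
source="/var/log/authlog" | rex "Last\s+message\s+repeated\s+(?<repeatsNoContext>\d+)\s+times" | fillnull value=0 repeatsNoContext | autoregress repeatsNoContext AS repeatsForMe | eval myCount = 1 + repeatsForMe | rex "User\s(?<user>\w+)\s+failed" | where isnotnull(user) | streamstats time_window=60s sum(myCount) AS recentFailures BY user | where recentFailures >= 3
```

The "repeated" lines are dropped by the where clause only after autoregress has already folded their counts into the preceding failure event.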


Contributor

Please see my comment on the question.


Legend

How about something like this, assuming all events are time-stamped correctly:

index=* sourcetype=* "failed login" | rex "User\s(?<user>\w+)" | timechart span=Ys list(user) as users count | where count>=N

If they aren't time-stamped, then we will need to calculate the time; something like this is worth trying:

index=* sourcetype=* "failed login" | rex "^\[(?<time>time)" | rex "\+\s(?<offset>\d+)" | eval time=strftime(strptime(time, directives)+offset, directives) | rex "User\s(?<user>\w+)" | bin span=Ys time | chart list(user) as users count by time

directives - http://strftime.org/
str?time - http://docs.splunk.com/Documentation/Splunk/6.1/SearchReference/Commonevalfunctions
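As a concrete illustration of those directives, here is a hedged sketch of the strptime/strftime round trip. The timestamp string and the `%b %d %H:%M:%S` format are assumptions (a typical syslog timestamp), not necessarily what your logs use:

```
... | eval parsed=strptime("Dec 11 05:30:46", "%b %d %H:%M:%S") | eval readable=strftime(parsed, "%Y-%m-%d %H:%M:%S")
```

Note that a syslog timestamp carries no year, so strptime may fill one in; check the parsed value before relying on it.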


Path Finder

Everything is time-stamped. I was just trying to make the example match a general case. A real example from my logs might be:
Dec 11 05:30:46 myServerName sshd[16484]: [ID 800047 auth.notice] Failed keyboard-interactive for myUserName from 232.181.212.242 port 50908 ssh2
Dec 11 05:31:03 myServerName last message repeated 2 times
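Given this real format, the autoregress approach from the earlier answer could be adapted along these lines. The lowercase "last message repeated" pattern, the field names, and the filters are assumptions fitted to the two sample lines, and this again relies on default newest-first ordering so each "repeated" line directly precedes its original event in the pipeline:

```
source="/var/log/authlog" ("Failed keyboard-interactive" OR "last message repeated") | rex "last\s+message\s+repeated\s+(?<repeats>\d+)\s+times" | fillnull value=0 repeats | autoregress repeats AS priorRepeats | rex "Failed keyboard-interactive for (?<user>\S+) from (?<src_ip>\S+)" | where isnotnull(user) | eval failCount = 1 + priorRepeats
```

For the sample above, the first event would end up with failCount = 3 (the original failure plus the two repeats).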


Contributor

I wrote that duplicate. Splunk works well with one line and one timestamp per event. You can of course extract that integer, but this kind of condensed Solaris log doesn't easily fit the Splunk/CIM/ES/PCI way of doing things.

https://answers.splunk.com/answers/256730/how-can-i-make-the-splunk-app-for-pci-compliance-c.html
