Splunk Enterprise

Field Extraction (aide.log) not showing

aturhano
Loves-to-Learn Lots

Hi,
I'm trying to extract File, Directory, mtime, and ctime from aide.log on Linux systems. So far I have set up the configuration below in props.conf under Splunk_TA_nix/local, but the fields don't show up in the web UI (in the left-hand fields column). What could be the problem? Your help is greatly appreciated. Thanks.

[aide]
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_PREFIX = Mtime\s{4}:\s\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s{14},\s
BREAK_ONLY_BEFORE = ((File:|Directory:))
CHARSET = UTF-8
EXTRACT-mtime = (Mtime\s{4}:\s\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s{14},\s(?<mtime>\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}))
EXTRACT-ctime = (Ctime\s{4}:\s(?<ctime>\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}))
EXTRACT-file = File:\s(?P<file>[\/]{1,}(\w|.)+)
EXTRACT-directory = Directory:\s(?P<directory>[\/]{1,}(\w|.)+)

Sample log format is:

File: /usr/share/locale/hu/LC_MESSAGES/gnupg2.mo
Ctime : 2017-06-05 06:32:00 , 2018-09-13 16:37:11
Inode : 1573959 , 1573958

Directory: /usr/share/locale/es/LC_MESSAGES
Mtime : 2018-07-13 10:27:02 , 2018-09-13 16:37:16
Ctime : 2018-07-13 10:27:02 , 2018-09-13 16:37:16

File: /usr/share/locale/es/LC_MESSAGES/sos.mo
Mtime : 2018-04-13 11:05:35 , 2018-07-25 07:00:49
Ctime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16
Inode : 1446886 , 1446885
MD5 : RKLbELKW5HsioSJ7bM9gww== , VgzX3Er81Q8mFGfQjUg6BQ==
RMD160 : PyFCxLjh+5uE3mg7nuqCzyyCebo= , Lr/v1Vcl90MrhP4+pn6eeYCG76g=
SHA256 : dy7si25ohaOYpS5zY/ZUoyvbabd6GoUe , JuioPCXbqvk7vUXVWm3GeX3PBKlrMwuG

File: /usr/share/locale/es/LC_MESSAGES/gnupg2.mo
Ctime : 2017-06-05 06:32:00 , 2018-09-13 16:37:11
Inode : 1446007 , 1446006

Directory: /usr/share/locale/nds/LC_MESSAGES
Mtime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16
Ctime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16
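
For reference, the patterns can be sanity-checked from the search bar before relying on props.conf. This is only a sketch against one pasted sample entry: the field names are illustrative, and \s+ is used in case the spacing in the real events differs from what is shown above.

| makeresults
| eval _raw="File: /usr/share/locale/es/LC_MESSAGES/sos.mo
Mtime : 2018-04-13 11:05:35 , 2018-07-25 07:00:49
Ctime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16"
| rex "File:\s(?P<file>\/\S+)"
| rex "Mtime\s+:\s+(?<mtime_old>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})\s+,\s+(?<mtime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})"
| rex "Ctime\s+:\s+(?<ctime_old>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})\s+,\s+(?<ctime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})"
| table file mtime_old mtime ctime_old ctime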


saravanan90
Contributor

There is an option in AIDE to write its report in syslog format (one line per entry). Add the lines below to aide.conf:

syslog_format = true

report_url=syslog:LOG_AUTH
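
If you go this route, the multi-line settings (SHOULD_LINEMERGE, BREAK_ONLY_BEFORE) are no longer needed. A props.conf stanza along these lines should be enough on the Splunk side; this is just a sketch, assuming the events keep the aide sourcetype and arrive one entry per line:

[aide]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8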


to4kawa
Ultra Champion
| makeresults 
| eval _raw="File: /usr/share/locale/hu/LC_MESSAGES/gnupg2.mo
Ctime : 2017-06-05 06:32:00 , 2018-09-13 16:37:11
Inode : 1573959 , 1573958

Directory: /usr/share/locale/es/LC_MESSAGES
Mtime : 2018-07-13 10:27:02 , 2018-09-13 16:37:16
Ctime : 2018-07-13 10:27:02 , 2018-09-13 16:37:16

File: /usr/share/locale/es/LC_MESSAGES/sos.mo
Mtime : 2018-04-13 11:05:35 , 2018-07-25 07:00:49
Ctime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16
Inode : 1446886 , 1446885
MD5 : RKLbELKW5HsioSJ7bM9gww== , VgzX3Er81Q8mFGfQjUg6BQ==
RMD160 : PyFCxLjh+5uE3mg7nuqCzyyCebo= , Lr/v1Vcl90MrhP4+pn6eeYCG76g=
SHA256 : dy7si25ohaOYpS5zY/ZUoyvbabd6GoUe , JuioPCXbqvk7vUXVWm3GeX3PBKlrMwuG

File: /usr/share/locale/es/LC_MESSAGES/gnupg2.mo
Ctime : 2017-06-05 06:32:00 , 2018-09-13 16:37:11
Inode : 1446007 , 1446006

Directory: /usr/share/locale/nds/LC_MESSAGES
Mtime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16
Ctime : 2018-07-13 10:26:14 , 2018-09-13 16:37:16" 
| rex max_match=10 "(?s)(?<message>^(File|Directory):.+?)(\s\s|$)" 
| stats count by message 
| rex field=message max_match=10 "(?m)(?<fieldname>^\w+)\s*:\s+(?<value>.+$)" 
| streamstats count as session 
| eval tmp=mvzip(fieldname,value,"=") 
| stats count by session tmp 
| rex field=tmp "(?<fieldname>^\w+)=(?<value>.+$)" 
| eval {fieldname}=value 
| fields - count tmp 
| foreach * 
    [ eval <<FIELD>> = split(<<FIELD>>,",") 
    | eval <<FIELD>> = trim(<<FIELD>>)] 
| stats list(*) as * by session 
| fields - fieldname value 
| eval counter=mvrange(0,2) 
| mvexpand counter 
| foreach * 
    [ eval <<FIELD>>=if(mvcount(<<FIELD>>)=2,mvindex(<<FIELD>>,counter),<<FIELD>>)] 
| fields - session counter 
| eval File_Directory=coalesce(File,Directory)
| eval type=if(isnull(Directory),"F","D")
| table File_Directory type Mtime Ctime Inode MD5 RMD160 SHA256

Hi @aturhano,
I realize this may not be exactly what you want, but I got it working.

If you want, please use this.
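
To run it against indexed data instead of the pasted sample, only the first two lines change: the makeresults/eval stub becomes a normal base search (the sourcetype name is assumed from the stanza in the question), and the rest of the search above stays the same.

sourcetype=aide
| rex max_match=10 "(?s)(?<message>^(File|Directory):.+?)(\s\s|$)"
| stats count by message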


jscraig2006
Communicator

Try this:

  Mtime\s\:\s(?<Mtime>\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s\,\s\d{4}\-\d{2}\-\d{2}\s\d{2}\:\d{2}\:\d{2})
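
A quick check of that pattern against one of the sample lines (just a sketch via makeresults; \s may need to become \s+ if the real events carry extra padding around the colon and comma):

| makeresults
| eval _raw="Mtime : 2018-04-13 11:05:35 , 2018-07-25 07:00:49"
| rex "Mtime\s\:\s(?<Mtime>\d{4,}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s\,\s\d{4}\-\d{2}\-\d{2}\s\d{2}\:\d{2}\:\d{2})"
| table Mtime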