Getting Data In

Why is Splunk not automatically recognizing the timestamps in my logs correctly, and how do I fix this?

Sjaggie
New Member

Hi,

I have a folder with 21 logs, all of different types but with the exact same format. The event types differ per log file (info / warning / error / etc.):

[01/Feb/2016 23:55:58] Failed IMAP login from 192.168.0.25, user jojo
[01/Feb/2016 09:41:34] SMTP server connection from 127.0.0.1 closed after 3 bad commands

Splunk nicely creates 21 different sourcetypes, which is great for filtering. Unfortunately, Splunk is not able to identify the DATE correctly: it will assign old log entries from 2010 to some date in 2016. The time is right, though. I've played around a lot with the manual sourcetype settings until I found out that I have to erase the data in Splunk first; only then are the settings from props.conf applied when the logs are imported again.

When I tested this, I created just one sourcetype that was applied to all 21 logs, but this is not nice for filtering. I could manually add the 21 logs with 21 different props stanzas, but I am wondering if there isn't an easier way to do this.

Can I not "help" the automatic sourcetype detection with the correct date format?

I do find it strange that Splunk is not able to detect such a fairly easy TIME_STAMP at the beginning of each line: [01/Feb/2016 23:55:58]. I guess the [ and ] screw up the REGEX engine.

Thanks,
Robbert

1 Solution

lguinn2
Legend

I agree, that's weird; I would have expected Splunk to find that timestamp easily. But no problem; this can be fixed, and you don't need to make everything just one sourcetype! In props.conf, you can specify either a host, source, or sourcetype for the stanza. So, if all the files came from the same directory (let's assume /var/log/mydata contains the 21 different logs), you could do this:

[source::/var/log/mydata/.*]
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %d/%b/%Y %H:%M:%S

If it is any consolation, Splunk will always parse the timestamp faster when it is specified in props.conf, since it doesn't have to figure out which format to use.
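
For reference, the other two stanza scopes look the same; only the stanza header changes. A minimal sketch (the host and sourcetype names below are hypothetical):

# match by host
[host::mailhost01]
TIME_FORMAT = %d/%b/%Y %H:%M:%S

# match by sourcetype (a bare stanza name matches a sourcetype)
[my_mail_sourcetype]
TIME_FORMAT = %d/%b/%Y %H:%M:%S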


Sjaggie
New Member

Hi lguinn,

Thanks for your fast reply. It is good to hear that there is a workaround, and it sounds even better that it might speed up the parsing. I'm perfectly happy working this way. I've adjusted your props.conf example to my needs. Unfortunately, it is not working:

[source::/opt/mailserver/logs/.*]
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %d/%b/%Y %H:%M:%S

When I add the logs after doing a clean eventdata, the dates are still misread by Splunk. The time is correct. Am I missing something here?

Is there a quicker way to force Splunk to reread the log files after making a change to the sourcetype or props.conf? Removing the data input, shutting down Splunk, running clean eventdata, and restarting Splunk again is not a very desirable workflow, especially since I actually do have valid data in Splunk from other logs.


lguinn2
Legend

The best practice is to use a test index when adding a new input. For example, create an index named "test" and then put index=test in inputs.conf. Then you can do splunk clean eventdata -index test and only the test data will be removed. Once the input works properly, simply change or remove the index=test line in inputs.conf.
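
As a minimal sketch of the inputs.conf side (the monitored path here is hypothetical; leave out any sourcetype line to keep the automatic sourcetyping):

[monitor:///opt/mailserver/logs]
index = test

Then, after a test run, remove just the test data:

$SPLUNK_HOME/bin/splunk clean eventdata -index test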

To get Splunk to re-read a particular file, you can use btprobe (see the docs for the details; Splunk typically needs to be stopped while btprobe runs). For example:

$SPLUNK_HOME/bin/btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/inputfile --reset

Finally, if you want Splunk to rescan your props.conf file without restarting Splunk, log in to the Splunk GUI and then enter the following URI in the web browser:

http://yoursplunkserver:8000/en-us/debug/refresh

Note that the first part of the URI refers to your Splunk server and the port number that you use to access the GUI.


Sjaggie
New Member

Thanks so much lguinn!

I managed to sort it out with your help and tips. I found out I was editing the wrong config file.

I was making changes to $SPLUNK_HOME/etc/apps/search/local/props.conf instead of $SPLUNK_HOME/etc/system/local/props.conf.

And I got there a lot quicker by not having to clean and restart Splunk all the time.
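
For anyone else who lands here: btool can show which copy of a setting Splunk actually uses, so a mix-up like this can be spotted without trial and error. A quick check might look like this (the grep filter is just an example):

$SPLUNK_HOME/bin/splunk btool props list --debug | grep TIME_FORMAT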
