I have a bunch of logs that I've added to Splunk and created sourcetypes for. These logs are updated once a week when a cron job runs. Each log contains only a small amount of data, and we usually check them to make sure the cron ran and the processes were successful. For each log we're only talking about a few lines of text.
So we know what is considered "normal" and "successful" from these few lines written once a week when the cron runs. What we want is for Splunk to tell us when something is "not" normal and alert us. The problem is, we don't always know what that would look like! In short, we don't know the types of errors we might get. We just know "if it doesn't look like this, tell us".
Here's an example of something you might see in a log that is normal.
Starting at Sun Aug 28 23:59:01 UTC 2011
All domains hosted on nameservers currently sponsored by the Registrar
==> Retrieving data from database.
==> Splitting report data by registrar.
Ended at Mon Aug 29 00:19:10 UTC 2011
How would I be able to tell Splunk to monitor the log and look for anything other than what I posted? I was looking at the file system change monitor, but I'm not sure how I would use it in this situation.
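One way to express "if it doesn't look like this, tell me" is a scheduled search over the indexed log that filters out the known-good lines and alerts on whatever remains. A sketch, assuming the sourcetype is named weekly_cron (substitute your own sourcetype and tighten the patterns to your actual log lines):

```
sourcetype=weekly_cron
| regex _raw!="^(Starting at|Ended at|All domains hosted|==> )"
```

Saved as an alert scheduled to run shortly after the cron window, and set to trigger when the result count is greater than zero, this fires on any line that doesn't start with one of the expected prefixes. A second alert on the same sourcetype that triggers when the event count is zero would also catch the case where the cron didn't run at all.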
When I started reading your post, the first thing that came to mind was to use scripted inputs...
And to be honest, it stayed on my mind until the end of your post. If you already have a script and you know what the output should look like, then use that script to tell Splunk when something is not good, and you're done.
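As a minimal sketch of that scripted-input idea: a small wrapper that reads the log, compares each line against a whitelist of known-good patterns, and prints only the lines that don't match. The patterns and the weekly_cron.log path below are assumptions based on the example log in the question; adjust them to your real output.

```python
import re
import sys

# Hypothetical whitelist of "normal" lines, based on the example log
# in the question; tighten or extend these for your real cron output.
NORMAL_PATTERNS = [
    re.compile(r"^Starting at \w{3} \w{3} [ \d]\d \d\d:\d\d:\d\d UTC \d{4}$"),
    re.compile(r"^Ended at \w{3} \w{3} [ \d]\d \d\d:\d\d:\d\d UTC \d{4}$"),
    re.compile(r"^All domains hosted on nameservers currently sponsored by the Registrar$"),
    re.compile(r"^==> .+$"),
]


def abnormal_lines(lines):
    """Return only the non-empty lines that match no known-good pattern."""
    return [
        line for line in lines
        if line.strip() and not any(p.match(line.strip()) for p in NORMAL_PATTERNS)
    ]


if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: check_log.py /path/to/weekly_cron.log (path is an example)
    with open(sys.argv[1]) as f:
        for line in abnormal_lines(f):
            # Anything printed here gets indexed by the scripted input,
            # so an alert on "ABNORMAL" catches every unexpected line.
            print("ABNORMAL: " + line.strip())
```

Run it as a Splunk scripted input (or from the cron itself, right after the main job), then alert on any event containing "ABNORMAL". The nice part of this inverted approach is exactly what you described: you never have to enumerate the possible errors, only the handful of lines you already know are good.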