I have a series of servers running Apache that serve up the same URL via POST 99% of the time, and in high volume. Indexing the requests individually would eat up way too much of our indexing volume, so currently they're excluded.
Using awk, I can process the file at log rotation time and produce aggregates like this:
28/Sep/2011:11:40 count=20393 avgsize=32535 avgtime=150 maxtime=710
That is one line per five-minute interval, per server.
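For reference, a minimal sketch of that kind of aggregation. It assumes a custom LogFormat where the response size is field 10 (as in combined format) and a response-time-in-milliseconds field is appended last; both are assumptions, so adjust the field positions for your own format. Sample input lines are inlined here to make the sketch self-contained:

```shell
# Two fabricated access-log lines stand in for a real log file;
# $10 is the response size, $NF the response time in ms (assumed format).
printf '%s\n' \
  '1.2.3.4 - - [28/Sep/2011:11:40:01 +0000] "POST /x HTTP/1.1" 200 100 "-" "-" 10' \
  '1.2.3.4 - - [28/Sep/2011:11:40:02 +0000] "POST /x HTTP/1.1" 200 300 "-" "-" 30' |
awk '{
  count++                      # total requests
  bytes += $10                 # running sum of response sizes
  t = $NF                      # response time of this request
  time += t
  if (t > maxt) maxt = t       # track the slowest request
}
END {
  printf "count=%d avgsize=%d avgtime=%d maxtime=%d\n",
         count, bytes / count, time / count, maxt
}'
```

In a real run you would replace the `printf` with the rotated log file and key the sums by timestamp bucket rather than collapsing the whole file.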
I'd like the information in closer to real time rather than waiting until the end of the week. Is there any way to do this completely within Splunk (without indexing every access log entry)? Alternatively, is there a way I can cron something to run periodically and write to a log file that Splunk then consumes?
I would suggest monitoring the file directly and using nullQueue routing to prevent the data from being indexed. All you'd need to do is come up with a regex that matches the URL showing up 99% of the time. Instructions for this can be found here:
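As a rough sketch, the routing lives in `props.conf` and `transforms.conf`; the file path, stanza name, and URL below are all illustrative placeholders:

```ini
# props.conf -- apply a transform to events from the monitored log
[source::/var/log/apache/access.log]
TRANSFORMS-null = drop_common_post

# transforms.conf -- events whose raw text matches REGEX go to the nullQueue
[drop_common_post]
REGEX = POST /your/common/url
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are discarded before indexing; everything else is indexed normally.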
To answer your question: you could write a script and put it in the crontab if you'd like, and Splunk can consume the resulting file via a monitor stanza, but I think you'd be better off doing the nullQueue routing and then just using the search language to produce the output you're interested in seeing.
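For whatever events you do index, a search along these lines could produce the same five-minute rollup; field names such as `response_time` depend entirely on your sourcetype's extractions, so treat them as placeholders:

```
sourcetype=access_combined
| bin _time span=5m
| stats count, avg(bytes) AS avgsize, avg(response_time) AS avgtime,
        max(response_time) AS maxtime by _time, host
```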
Thanks. I'm already filtering them to exclude them from indexing. I'm guessing there is nothing more I can do with Splunk at that point (like also sending them to a text file that I could process via a cron job)?
At this point I may have to spend some quality time with sed & awk to process the Apache log at intervals, keep track of where I am in the file, and hand-feed Splunk.