Currently I have a simple Perl script set up to email me if a couple of directories go above a certain quota, along with a couple of other warnings. That's all good, but I'd like to move this monitoring into Splunk. So I guess my question is: what would be the best way to do that?
I tried creating a simple log with a script that simply runs
du -s --exclude=.* /directory >> $LOGFILE
and I can get Splunk to index it easily enough, but I'm at a loss on what to do after that. How would I compose a search that sets up a timechart with multiple lines (each line being a monitored directory)?
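For reference, the full logging script is essentially just a loop over the monitored directories (the log path and directory names below are placeholders, not my real paths):

```shell
#!/bin/sh
# Disk-usage logger: appends one "size_kb<TAB>path" line per directory.
# The log path and directory names here are placeholders.

log_disk_usage() {
    logfile=$1
    shift
    for dir in "$@"; do
        # -s gives one summary line per directory;
        # --exclude='.*' skips hidden entries (GNU du option)
        du -s --exclude='.*' "$dir" >> "$logfile"
    done
}

# Example invocation (e.g. from cron):
# log_disk_usage /var/log/du_monitor.log /home/user /mnt/backup
```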
I've been playing with setting up different dashboards, but I think I'm just missing either how to form the search query or how to set up a more sophisticated log.
I've been reading through a lot of the documentation and have no problem doing so, so if this is outlined somewhere else, please point me in that direction.
Also I'm using the *NIX app.
Please post a sample event or two. Without knowing exactly what format you're seeing (and how your events are being broken up), it's difficult to provide a good example search. You can add this to your existing question using the "edit" link. (Indent 4 spaces to prevent any weird formatting.)
Here are some thoughts...
OK, some basics. You should be able to pull your events into a simple table using something like this:
source=/path/to/your/logfile | rex "^(?<total_kb>\d+)\s+(?<path>/.*)$" | table _time, path, total_kb
Now, if you want to do some visual trend analysis, a simple "timechart" search should do the trick. (You'll need to switch to a charting view to see this as a graph; click the "show report" link.)
source=/path/to/your/logfile | rex "^(?<total_kb>\d+)\s+(?<path>/.*)$" | timechart avg(eval(total_kb/1024)) as total_mb by path
Now, if you want to get fancier and set up some size limits, so you only see events where the limit is exceeded, you could use a search like this:
source=/path/to/your/logfile | rex "^(?<total_kb>\d+)\s+(?<path>/.*)$" | eval limit_kb=case(path=="/home/user", 100000, path=="/mnt/backup", 10000000, 0==0, 50000) | where total_kb > limit_kb
Notice that the 0==0 is always true; basically this says to use 50000 as the default limit for any path other than the two explicitly listed.
I hope that gives you some ideas on how to get started. I've made some basic assumptions about how your events will probably look, based on running du -s on my system, but the regex may be different for you.
Here are some takeaways (aka homework), if you're interested...
1.) Figure out how to get the regular expression shown above into a configuration file so you don't need to use the "rex" command in all of your searches. (Hint: check out the EXTRACT entry in the props.conf documentation.)
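As a sketch of what that could look like (the sourcetype name "du_monitor" is an assumption; use whatever sourcetype your input actually gets):

```ini
# props.conf
[du_monitor]
EXTRACT-du_fields = ^(?<total_kb>\d+)\s+(?<path>/.*)$
```

With that in place, total_kb and path are extracted automatically at search time, so the searches above work without the leading rex command.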
2.) See if you can set up those per-path limits in a lookup file. That way you can have a simple CSV file where you store all your paths and the size limit for each path.
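A rough sketch of that approach (the lookup file name "dir_limits.csv" and its column names are assumptions; the CSV must be uploaded as a lookup table file first):

```
# dir_limits.csv
path,limit_kb
/home/user,100000
/mnt/backup,10000000
```

Then the search replaces the case() expression with a lookup, keeping 50000 as the default for unlisted paths:

```
source=/path/to/your/logfile | rex "^(?<total_kb>\d+)\s+(?<path>/.*)$" | lookup dir_limits.csv path OUTPUT limit_kb | eval limit_kb=coalesce(limit_kb, 50000) | where total_kb > limit_kb
```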
You could use BREAK_ONLY_BEFORE; however, a much simpler and better approach would be to simply set SHOULD_LINEMERGE=false, which means that each line will become its own event, which I think is all you want in this case.
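For example (again assuming a sourcetype named "du_monitor"; substitute your own):

```ini
# props.conf
[du_monitor]
SHOULD_LINEMERGE = false
```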