Splunk Search

Gathering Stats on Log Entries within a Time Period Denoted by Log Entries in another Log

derekwalsh_1
Explorer

Hi Guys,

I have log entries in one log file that denote the start and end of a time frame of interest in my logs. These log entries look like this:

2014/05/01 11:50:47.255 StartedJob Job=Job-105
2014/05/01 11:52:26.545 EndedJob Job=Job-105

Pretty easy to make a transaction out of this and figure out when my job started, when it ended, and how long it took. Now, in another log source I'm recording my system load (every second) as the number of events per second my system is generating. The entries in that log look like this:

2014/05/01 11:50:46.000 Events=27
2014/05/01 11:50:47.000 Events=234
2014/05/01 11:50:48.000 Events=269
2014/05/01 11:50:49.000 Events=307
2014/05/01 11:50:50.000 Events=145
2014/05/01 11:50:51.000 Events=14
...

I would love to show system load statistics (let's say count of system events) while a Job is running. Oh and another little wrinkle, Jobs will overlap sometimes (which is fine, I will likely observe spikes in system load). I'd like to analyze the effect of particular Jobs on system load.
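Not SPL, but for anyone reproducing this outside Splunk, here's a minimal Python sketch of the two parsing steps, assuming the exact line layouts shown above (the function names are made up; pairing start/end lines is roughly what `transaction` does in Splunk):

```python
from datetime import datetime

TS_FMT = "%Y/%m/%d %H:%M:%S.%f"

def parse_job_line(line):
    """'2014/05/01 11:50:47.255 StartedJob Job=Job-105' -> (time, event, job)"""
    date, time, event, kv = line.split()
    return datetime.strptime(f"{date} {time}", TS_FMT), event, kv.split("=", 1)[1]

def parse_load_line(line):
    """'2014/05/01 11:50:46.000 Events=27' -> (time, event count)"""
    date, time, kv = line.split()
    return datetime.strptime(f"{date} {time}", TS_FMT), int(kv.split("=", 1)[1])

def job_intervals(lines):
    """Pair each StartedJob with its EndedJob into {job: (start, end)}."""
    starts, out = {}, {}
    for t, event, job in map(parse_job_line, lines):
        if event == "StartedJob":
            starts[job] = t
        else:
            out[job] = (starts.pop(job), t)
    return out
```

With the intervals in hand, summing the per-second `Events` samples that fall inside each job's window gives the per-job load, overlapping jobs included.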

Now maybe it would be more useful to annotate a "System Load" line chart with Job (Started and Ended) events? I guess I could post that as another question.

It would be awesome if someone could point me in a direction here.

Thanks

1 Solution

martin_mueller
SplunkTrust

You can do something like this:

base search for job starts and ends | transaction over your job id | eval end_time = _time + duration | map maxsearches=0 search="sourcetype=system_load earliest=$_time$ latest=$end_time$ | eval job=$job$ | stats sum(Events) by job"

That should run one search for each job, limited to the runtime of the job and summing up the event counter per job. You may need to adjust the syntax, field names, etc. to fit your environment.


derekwalsh_1
Explorer

Yep, sweet. Minor correction:


base search for job starts and ends | transaction over your job id | eval end_time = _time + duration | map maxsearches=0 search="search sourcetype=system_load earliest=$_time$ latest=$end_time$ | eval job=$job$" | stats sum(Events) by job


derekwalsh_1
Explorer

Sorry, I would if I actually had any 🙂


lguinn2
Legend

This would be easier if you actually posted your searches. Thanks!
