Dashboards & Visualizations

How do I create a custom dashboard plotting time against a specific value, a number I need extracted from the 8th position in a list?

vineetc
Engager

I am planning to create a dashboard in which I have to plot time against a very specific (custom) value extracted from the log file. The log file looks like this:

Sep 30 11:10:59 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] wsgw(service): trans(11303955)[1.1.1.1]: Latency:   0   1   0   1   1   1   0 145 146 145 146 146 146 145   1   1 [https://xyzhost/x/y]
host = xyzhost.com source = udp:4000 sourcetype = syslog
9/30/15
11:10:58.000 AM 
Sep 30 11:10:58 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] wsgw(service): trans(13115233)[1.1.1.1]: Latency:   0   2   0   2   2   2   0 171 173 171 173 173 172 172   2   2 [[https://xyzhost/x/y]

I have to extract the 8th number after "Latency:". How do I do that?
In the logs above those numbers would be 145 and 171, so the dashboard should plot like this:

Time vs Value
Sep 30 11:10:59 vs 145
Sep 30 11:10:58 vs 171

How to achieve this? I'm just starting to learn Splunk.

0 Karma
1 Solution

maciep
Champion

You can use a search command called "rex" to grab a named regex capture group, and Splunk will create a new field for it. In this case, the pattern skips the first seven numbers after the "Latency:" part of the event and captures the 8th into a field called "cool_num". Then you can just create a table of _time (the timestamp) and the new field you extracted.

Your base search .... | rex "Latency:\s+(\d+\s+){7}(?<cool_num>\d+)" | table _time cool_num
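Outside Splunk, you can sanity-check that regex with Python's re module (a small sketch using one of the sample events from the question; note Python spells named groups (?P<name>...) where rex accepts (?<name>...)):

```python
import re

# One of the sample events from the question, as a single line.
line = ("Sep 30 11:10:59 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] "
        "wsgw(service): trans(11303955)[1.1.1.1]: Latency:   0   1   0   1   1   1"
        "   0 145 146 145 146 146 146 145   1   1 [https://xyzhost/x/y]")

# Same idea as the rex: skip seven numbers after "Latency:", capture the 8th.
m = re.search(r"Latency:\s+(?:\d+\s+){7}(?P<cool_num>\d+)", line)
print(m.group("cool_num"))  # -> 145
```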


vineetc
Engager

Thanks for your response - I was able to achieve it, but now I would like to show a service name when the mouse hovers over the plotted graph. The service name should be obtained from the log file itself. In my case the service name is "service". Below is the log:

Sep 30 11:10:59 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] wsgw(service): trans(11303955)[1.1.1.1]: Latency: 0 1 0 1 1 1 0 145 146 145 146 146 146 145 1 1 [https://xyzhost/x/y]
host = xyzhost.com source = udp:4000 sourcetype = syslog
9/30/15
11:10:58.000 AM

Sep 30 11:10:58 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] wsgw(service): trans(13115233)[1.1.1.1]: Latency: 0 2 0 2 2 2 0 171 173 171 173 173 172 172 2 2 [[https://xyzhost/x/y]

0 Karma

maciep
Champion

I'm not sure what kind of chart you're looking for or how you want it to appear. Are you trying to plot the numbers by service over time? If so, something like this might work.

... | rex "wsgw\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)" | timechart max(cool_num) by service
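To see both captures working together, here is a quick check in Python's re module (Python names groups (?P<...>); the sample events are from the question):

```python
import re

# The two sample events from the question, as single-line strings.
events = [
    "Sep 30 11:10:59 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] "
    "wsgw(service): trans(11303955)[1.1.1.1]: Latency: 0 1 0 1 1 1 0 145 146 145 "
    "146 146 146 145 1 1 [https://xyzhost/x/y]",
    "Sep 30 11:10:58 xyzhost.com Sep 30 11:08:12 qa[0x80e00073][latency][info] "
    "wsgw(service): trans(13115233)[1.1.1.1]: Latency: 0 2 0 2 2 2 0 171 173 171 "
    "173 173 172 172 2 2 [https://xyzhost/x/y]",
]

# Same pattern as the rex: capture the name inside wsgw(...) and the
# 8th number after "Latency:".
pattern = re.compile(r"wsgw\((?P<service>[^\)]+).+Latency:\s+(?:\d+\s+){7}(?P<cool_num>\d+)")

pairs = [(m.group("service"), m.group("cool_num"))
         for m in (pattern.search(e) for e in events) if m]
print(pairs)  # -> [('service', '145'), ('service', '171')]
```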
0 Karma

vineetc
Engager

Thanks a lot - it did work. I get the max latency of the service per time interval, which is great and will work. One thing I didn't get: putting just one service name in the search expression actually plotted separate line graphs for every distinct service name the log file had. Is Splunk intuitive enough to guess what we actually mean even though I placed just one service name in the whole expression?

0 Karma

maciep
Champion

I think I understand what you're asking. The "service" referenced in the search isn't a literal string. We're actually using the rex command to extract whatever is in the parentheses after "wsgw" and storing that value in a new field named "service". Then in the timechart command, we're telling Splunk to group the results by the various values in that "service" field.

Also a quick note on the max aggregate. Usually when you plot against time in Splunk, you will have multiple values within a chunk of time. So you have to use some sort of aggregate function to tell Splunk how to combine all of those values - min, max, avg, sum, etc. And you can specify multiple aggregates as well, for example if you wanted to see both the min and max values.

0 Karma

vineetc
Engager

Thanks again - but it looks like I'd like to know a bit more. Can we OR the entire rex, so that it stores the 8th number in cool_num and puts whatever follows "wsgw(" in service?

What if some of the log lines don't contain wsgw but mpgw instead?

I tried this but it doesn't work -

rex "wsgw\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)" OR rex "mpgw\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)"

Say, can you use an OR between regexes?

0 Karma

vineetc
Engager

Well, looks like I am getting better - I put a pipe in to make it work.

host="phx10xwsdpi8001.lcc.usairways.com" | rex "wsgw\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)" | rex "mpgw\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)" | timechart max(cool_num) by service

0 Karma

maciep
Champion

On this forum, don't forget to put your splunk search code inside the code blocks - if you just paste it in the comment box, a bunch of stuff gets parsed out for some reason.

But back to the question. Instead of rex'ing twice, can we generalize our regex to grab the service name no matter what those letters are? For example, if the pattern is always "[info] xxxx(service)", then something like this might work.

\[info\][^\(]+\((?<service>[^\)]+).+Latency:\s+(\d+\s+){7}(?<cool_num>\d+)
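Checking that generalized pattern in Python (the wsgw line is adapted from the thread; the mpgw line and its service name "other_svc" are hypothetical, assuming mpgw events follow the same "[info] xxxx(service)" shape):

```python
import re

events = [
    "... qa[0x80e00073][latency][info] wsgw(service): trans(11303955)"
    "[1.1.1.1]: Latency: 0 1 0 1 1 1 0 145 146 145 146 146 146 145 1 1",
    # Hypothetical mpgw event with the same structure.
    "... qa[0x80e00073][latency][info] mpgw(other_svc): trans(13115233)"
    "[1.1.1.1]: Latency: 0 2 0 2 2 2 0 171 173 171 173 173 172 172 2 2",
]

# Anchor on "[info]" instead of a specific gateway name.
pattern = re.compile(
    r"\[info\][^\(]+\((?P<service>[^\)]+).+Latency:\s+(?:\d+\s+){7}(?P<cool_num>\d+)"
)

results = [(m.group("service"), m.group("cool_num"))
           for m in (pattern.search(e) for e in events) if m]
print(results)  # -> [('service', '145'), ('other_svc', '171')]
```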