Splunk Search

How to split an event into multiple rows in a table?

mihenn
Path Finder

Hello everyone,

I'm trying to analyze a process log file. The log file contains an event for every ended process. Each event contains the following data:

Process ID, Starttime, Endtime, bytes_transferred

Now I want to build a timechart of bytes transferred. But I do not want to show all the bytes at the single point in time when the process ended; bytes_transferred should be spread over the interval during which the process was running.

I have calculated the duration and the bytes_per_second of each process. Now I want to write one row for each second a process was running, with the corresponding calculated timestamp (start_time+1, start_time+2, start_time+3, ..., start_time+duration).
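
For reference, the calculation looks roughly like this (assuming the fields are extracted as starttime, endtime, and bytes_transferred, with the times as epoch seconds):

base search | eval duration=endtime-starttime | eval bytes_per_second=bytes_transferred/duration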

How can I do this with Splunk?

Thank you

1 Solution

sundareshr
Legend

Try this

base search | eval event_range=mvrange(starttime, endtime, "1s") | mvexpand event_range | table process_id event_range bytes_per_second
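
To turn that into the per-second timechart, one option (assuming starttime and endtime are epoch timestamps, so the expanded event_range values are epoch seconds as well) is to use event_range as _time and sum up bytes_per_second:

base search | eval event_range=mvrange(starttime, endtime, "1s") | mvexpand event_range | eval _time=event_range | timechart span=1s sum(bytes_per_second) AS bytes_transferred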


anand_singh17
Path Finder

mvexpand 'field_name'

works!!


Richfez
SplunkTrust

A few lines of sample data would help us immensely here: both what you are starting with and what the data looks like at the end of the steps you are doing. It would also help if you could mock up what you want the result to look like.

Thanks!
Rich
