Splunk Search

Use Rex extracted field to do duration calculations

last_dance
Observer

I have a bunch of logs containing different table operations, and I want to check how much time each table operation costs. I extract the table name from the message; however, how do I use this extracted value to calculate the duration of each table operation? I got stuck here.

index=stg_heroku app_env=ppr app_name=datamigration | rex field=_raw "data retrieval for table (?<table_name>\w+) is starting" | transaction 'table_name' startswith="starting" endswith="json is pushed" | stats perc90(duration) as 90%_Consumed_Time

I try to extract table_name using rex, and then use table_name in transaction for grouping.


Could someone help with this?

 


diogofgm
SplunkTrust

Use

eval epoch = strptime(your_time_field, "%Y-%m-%d ... remaining time format ")

| delta epoch as delta

Complete the time format to match your timestamp.

The eval converts the timestamp into epoch time, which is in seconds, and delta calculates the difference between consecutive epoch values.

If you want, after that you can make it more readable using

| eval duration = tostring(delta, "duration")
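Putting it together, a sketch might look like this. Note that your_time_field and the exact strptime format string are assumptions you need to adapt to your data (the sample events use an ISO-8601 @timestamp like 2021-03-10T01:22:56.097-08:00), and delta computes differences in the order events are returned, so you may want a sort first:

index=stg_heroku app_env=ppr app_name=datamigration
| eval epoch = strptime(your_time_field, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| sort 0 epoch
| delta epoch as delta
| eval duration = tostring(delta, "duration")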

------------
Hope I was able to help you. If so, some karma would be appreciated.

last_dance
Observer

Thank you!

What I want to do is first extract the table name, then calculate the duration of each table operation.

For each table we have 6 operation log events; however, different tables are worked on asynchronously, so the events from different tables overlap.

For a single table, like the operation log below, I can search by the table name to list all of its events and then calculate the duration. However, how can I use the extracted table name in a further calculation using transaction?

 

{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:56.097-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"DEBUG","logger_name":"com.aligntech.ids.services.migration.job.TableMigrationTaskProcessor","message":"data retrieval for table tblpupatientimages is completed"}
{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:56.096-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"DEBUG","logger_name":"com.aligntech.ids.services.migration.job.TableMigrationTaskProcessor","message":"tblpupatientimages json is pushed to file align/migration/pending/azhao5yu/tblpupatientimages/tblpupatientimages_0.json"}
{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:55.735-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"DEBUG","logger_name":"com.aligntech.ids.services.migration.job.TableMigrationTaskProcessor","message":"data retrieval for table tblpupatientimages, index 0 is completed"}
{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:55.715-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"DEBUG","logger_name":"com.aligntech.ids.services.migration.job.TableMigrationTaskProcessor","message":"data retrieval for table tblpupatientimages, index 0 is starting"}
{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:55.714-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"DEBUG","logger_name":"com.aligntech.ids.services.migration.job.TableMigrationTaskProcessor","message":"data retrieval for table tblpupatientimages is starting"}
{"heroku-guid":"4d22a1d1-4dca-41f0-87ad-ab8804017095","heroku-dyno":"web.2","@timestamp":"2021-03-10T01:22:52.560-08:00","app_env":"ppr","app_name":"datamigration","app_version":"no","env_label":"no","migrationId":"26232","clinid":"azhao5yu","thread_name":"Async-Data-Reading-Thread-6","level":"INFO","logger_name":"com.aligntech.ids.services.migration.job.AbstractTaskProcessor","message":"triggered task for tblpupatientimages, identifying column name vip_patient_id, total values to be retrieved 28"}
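Based on these sample events, one possible approach (a sketch, not tested against your data) is to extract table_name from both the start and end messages, so that every event in the pair carries the field before transaction groups them:

index=stg_heroku app_env=ppr app_name=datamigration
| rex field=_raw "data retrieval for table (?<table_name>\w+) is starting"
| rex field=_raw "(?<table_name>\w+) json is pushed"
| transaction table_name startswith="is starting" endswith="json is pushed"
| stats perc90(duration) as perc90_consumed_time by table_name

Two things to note: transaction takes the field name bare, without quotes (single quotes in SPL dereference a field's value inside eval expressions), and rex must match on the "json is pushed" events too, otherwise they have no table_name and cannot be grouped with their start event.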
