Getting Data In

multiple events and multiple key-value pairs (one being timestamp) in one json

104K
Engager

Hi

I have a series of key-value pairs (a timestamp and one other key) in one JSON file, which looks like the below:

{"k":"host|something","rd":[{"n":1347028874805368,"v":1},{"n":1347028874910007,"v":5},{"n":1347028874912282,"v":5},{"n":1347039314560733,"v":1},{"n":1347039314665657,"v":5},
... {"n":1347443694173854,"v":5}]}

My question is: how do I make the "n" value work as the timestamp and the "v" value as the value of "v"? I am guessing it has something to do with transforms.conf...

Any help will be greatly appreciated! Thank you in advance!

1 Solution

daniel_splunk
Splunk Employee

| spath
| rename rd{}.n AS file_time
| rename rd{}.v AS file_count
| eval x = mvzip(file_time, file_count)
| mvexpand x
| eval x = split(x, ",")
| eval file_time = mvindex(x, 0)
| eval file_count = mvindex(x, 1)
| eval file_time = file_time / 1000000
| convert timeformat="%Y:%m:%d:%H:%M:%S" ctime(file_time)
| table file_time, file_count
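For readers who want to see what the search is doing, here is a small Python sketch of the same transformation (this is an illustration, not part of the Splunk solution; the sample data is truncated from the original event): each "n"/"v" pair is expanded into its own row, and "n", which is epoch time in microseconds, is divided by 1,000,000 and formatted as a timestamp.

```python
import json
from datetime import datetime, timezone

# Sample event in the same shape as the original JSON (list truncated).
raw = '{"k":"host|something","rd":[{"n":1347028874805368,"v":1},{"n":1347028874910007,"v":5}]}'

event = json.loads(raw)

# Mirror the SPL: expand each {"n", "v"} pair into its own row,
# converting "n" from epoch microseconds to a formatted timestamp.
rows = []
for pair in event["rd"]:
    file_time = pair["n"] / 1_000_000  # microseconds -> seconds
    ts = datetime.fromtimestamp(file_time, tz=timezone.utc)
    rows.append((ts.strftime("%Y:%m:%d:%H:%M:%S"), pair["v"]))

for file_time, file_count in rows:
    print(file_time, file_count)
```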


104K
Engager

Thank you. Works like a charm. Is there any way I can do the event breaking at indexing time by the way?
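The index-time event-breaking question is not answered in the thread. A sketch of the usual props.conf approach is below; the sourcetype name and regex are assumptions, and the settings are untested against this data:

```
# props.conf -- hypothetical sourcetype; a sketch, not a confirmed answer
[my_json_counts]
SHOULD_LINEMERGE = false
# Break the event before each {"n": ...} pair so each becomes its own event
LINE_BREAKER = ((?=\{"n":))
# "n" is epoch time in microseconds: seconds (%s) followed by 6 subsecond digits
TIME_PREFIX = \{"n":
TIME_FORMAT = %s%6N
MAX_TIMESTAMP_LOOKAHEAD = 20
```

Note that breaking mid-array produces events that are no longer valid JSON on their own, so index-time field extraction would also need adjusting.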
