Getting Data In

Multiple events and multiple key-value pairs (one being a timestamp) in one JSON file

104K
Engager

Hi

I have a series of two key-value pairs (a timestamp and one other key) in one JSON file, which looks like the example below:

{"k":"host|something","rd":[{"n":1347028874805368,"v":1},{"n":1347028874910007,"v":5},{"n":1347028874912282,"v":5},{"n":1347039314560733,"v":1},{"n":1347039314665657,"v":5},
... {"n":1347443694173854,"v":5}]}

My question is how to make each "n" value work as the event timestamp and each "v" value as its value. I am guessing it has something to do with transforms.conf, though...

Any help will be greatly appreciated! Thank you in advance!

1 Solution

daniel_splunk
Splunk Employee

| spath
| rename rd{}.n AS file_time
| rename rd{}.v AS file_count
| eval x=mvzip(file_time,file_count)
| mvexpand x
| eval x=split(x,",")
| eval file_time=mvindex(x,0)
| eval file_count=mvindex(x,1)
| eval file_time=(file_time/1000000)
| convert timeformat="%Y:%m:%d:%H:%M:%S" ctime(file_time)
| table file_time, file_count
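The key steps: spath extracts the rd{} arrays as multivalue fields, mvzip pairs each "n" with its "v" (comma-delimited by default), and mvexpand splits each pair into its own result row. The division by 1000000 converts the "n" values from epoch microseconds to the epoch seconds that convert's ctime() expects. A quick way to sanity-check the conversion on one value from the sample above (assuming Splunk 6.3 or later for the makeresults command):

| makeresults
| eval file_time=1347028874805368/1000000
| convert timeformat="%Y:%m:%d:%H:%M:%S" ctime(file_time)
| table file_time

This should print a timestamp in early September 2012, confirming that the "n" values are microsecond-precision epoch times.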


104K
Engager

Thank you, works like a charm. By the way, is there any way I can do the event breaking at indexing time?
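One common approach to index-time event breaking is LINE_BREAKER plus timestamp settings in props.conf. A rough sketch, assuming a hypothetical sourcetype name json_metric_series and untested against this exact feed:

# props.conf (sketch; sourcetype name is hypothetical)
[json_metric_series]
SHOULD_LINEMERGE = false
# Start a new event at each comma that precedes a {"n": ...} object;
# the comma in the capture group is discarded as the event delimiter.
LINE_BREAKER = (,)\{"n":
# The timestamp follows {"n": and is epoch seconds plus six subsecond digits.
TIME_PREFIX = \{"n":
TIME_FORMAT = %s%6N
MAX_TIMESTAMP_LOOKAHEAD = 17

One caveat with breaking mid-JSON like this: the wrapper text ({"k":...,"rd":[ on the first event and the trailing ]} on the last) stays attached to those events, so some SEDCMD or search-time cleanup would still be needed.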
