Using the untable command and performing a join on two fields

khojas02
Engager

Hello Everyone,

I need help with two questions. Please consider the scenario below:

index=foo source="A" OR source="B" OR source="C" OR source="D" OR source="E" OR source="F" OR source="G" OR source="H" OR source="I" earliest=-14d
| bin _time span=1d
| fields - index, splunk_server, punct, linecount, splunk_server_group, tag, tag::eventtype, unix_category, unix_group, _raw, eventtype, host, source, sourcetype
| table *
| untable _time FieldName FieldValue
| stats count as Event_count, dc(FieldValue) as distinctCount, mean(FieldValue) as mean by _time FieldName
| sort - _time

The output of the above search is a table like this:
_time       FieldName   Event_count   distinctCount   mean
05/13/2020  Field1      520           520
05/13/2020  Field2      77            56
05/13/2020  Field3      1183          1177            450
05/13/2020  Field4      1785          1785            3164.5299719887953

I have similar values for the last 14 days, and the field values in the FieldName column come from the various sources listed in the search. Is it possible to add the respective source column for each of those field values?
Some fields come from multiple sources, so is it also possible to split the count values by the source they come from? (For instance, Field1 could come from sources E and F, with counts of 260 from E and 260 from F.)
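
One way to keep source through untable is to fold it into the untable key with a helper field and split it back out afterwards. A minimal sketch, where row_key is a made-up helper name and the field list mirrors the original search:

index=foo source="A" OR source="B" OR source="C" earliest=-14d
| bin _time span=1d
| eval row_key=_time."|".source
| fields - _time, source, index, splunk_server, punct, linecount, splunk_server_group, tag, tag::eventtype, unix_category, unix_group, _raw, eventtype, host, sourcetype
| untable row_key FieldName FieldValue
| eval _time=tonumber(mvindex(split(row_key, "|"), 0)), source=mvindex(split(row_key, "|"), 1)
| fields - row_key
| stats count as Event_count, dc(FieldValue) as distinctCount, mean(FieldValue) as mean by _time source FieldName
| sort - _time

With source in the stats by-clause, a field that arrives from both E and F would show up as two rows, one per source, so a count like Field1's 520 splits into its per-source parts.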

I have another search, shown below:

index=foo source=* earliest=-14d
| bin _time span=1d
| stats count as totalCount by _time source
| sort - _time

The output of the above search is a table like this:

_time source totalCount
2020-05-13 A 283
2020-05-13 B 1785
2020-05-13 C 252
2020-05-13 D 507
2020-05-13 E 336
2020-05-13 F 10527
2020-05-13 G 1183
2020-05-13 H 2586

My other question: is it possible to join these two tables on the columns _time and source, so that the totalCount from the second table is added to the first table for each field value, based on the source it comes from?
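
The most direct way to express this is SPL's join command. A minimal sketch, assuming the first search has been adjusted to carry a source column (as in the untable sketch above):

... first search, with source kept as sketched above ...
| join type=left _time source
    [ search index=foo source=* earliest=-14d
      | bin _time span=1d
      | stats count as totalCount by _time source ]
| sort - _time

Note that join runs the bracketed search as a subsearch, which is subject to Splunk's subsearch result and runtime limits, so it may not be the best fit for large result sets.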

Any help would be appreciated!!

Solution

to4kawa
Ultra Champion
| tstats count as totalCount where index=foo earliest=-14d by _time span=1d source
| append [ search index=foo source="A" earliest=-14d
| bin _time span=1d
| fields - index, splunk_server, punct, linecount, splunk_server_group, tag, tag::eventtype, unix_category, unix_group, _raw, eventtype, host, source, sourcetype
| table *
| untable _time FieldName FieldValue
| stats count as Event_count, dc(FieldValue) as distinctCount, mean(FieldValue) as mean by _time FieldName
| eval source="A" ]
| append [ search index=foo source="B" earliest=-14d
...
| eval source="E" ]
| selfjoin _time source
| sort - _time

You'll have to count them one by one.
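
If selfjoin's row-pairing behavior ever gets in the way, an alternative merge for the same tstats-plus-append output is eventstats, which copies totalCount onto every row sharing _time and source. A sketch, assuming FieldName is only present on the appended rows:

... same tstats + append pipeline as above ...
| eventstats max(totalCount) as totalCount by _time source
| where isnotnull(FieldName)
| sort - _time

Because eventstats runs over the whole result set rather than joining pairs of rows, there is no per-match limit to worry about.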


khojas02
Engager

This has worked for me. Thank you for your help!!
