Hey Splunkers,
Here is my original query, where the subsearch is getting truncated to 50,000 records:
index=abc sourcetype=abc_errors
| rename device.headwaters.watermark.core.DeviceInfo.receiverId.string AS receiverId
| fields receiverId
| join receiverId [search index=abc sourcetype=abc_temp | fields receiverId billingId]
| table receiverId billingId
I am trying to rewrite it with a stats command so that I don't have to use join. Here is what I thought might work, but it doesn't:
index=abc (sourcetype=abc_errors OR sourcetype=abc_temp)
| fields sourcetype receiverId billingId device.headwaters.watermark.core.DeviceInfo.receiverId.string
| rename device.headwaters.watermark.core.DeviceInfo.receiverId.string AS receiverId
| dedup receiverId sourcetype
| stats count AS total by receiverId
| where total > 1
| table receiverId
Can someone tell me what I might be doing wrong? I know there is something funky about the dedup, but I can't think of anything else right now.
Thanks,
Divyank
I figured out a way to do it. I took the coalesce idea from @adonio, thank you for that. Here is the solution query:
index=abc (sourcetype=abc_errors OR sourcetype=abc_temp)
| fields sourcetype receiverId billingId device.headwaters.watermark.core.DeviceInfo.receiverId.string
| rename device.headwaters.watermark.core.DeviceInfo.receiverId.string AS Receiver
| eval receiver_id = coalesce(Receiver, receiverId)
| dedup receiver_id sourcetype
| stats count(sourcetype) AS total by receiver_id
| where total > 1
| stats count(receiver_id) AS match
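If it helps anyone follow what the pipeline is doing, here is a rough Python sketch of the same coalesce + dedup + stats logic outside Splunk. The sample events are invented for illustration; the field names mirror the ones in the query above:

```python
from collections import Counter

# Invented sample events: abc_errors carries the ID in "Receiver"
# (after the rename), abc_temp carries it in "receiverId".
events = [
    {"sourcetype": "abc_errors", "Receiver": "r1"},
    {"sourcetype": "abc_errors", "Receiver": "r1"},  # duplicate, dropped by dedup
    {"sourcetype": "abc_temp", "receiverId": "r1", "billingId": "b1"},
    {"sourcetype": "abc_errors", "Receiver": "r2"},  # no match in abc_temp
    {"sourcetype": "abc_temp", "receiverId": "r3", "billingId": "b3"},
]

# eval receiver_id = coalesce(Receiver, receiverId)
for e in events:
    e["receiver_id"] = e.get("Receiver") or e.get("receiverId")

# dedup receiver_id sourcetype: keep one event per (receiver_id, sourcetype)
seen = set()
deduped = []
for e in events:
    key = (e["receiver_id"], e["sourcetype"])
    if key not in seen:
        seen.add(key)
        deduped.append(e)

# stats count by receiver_id | where total > 1:
# an ID with total > 1 appears in both sourcetypes
totals = Counter(e["receiver_id"] for e in deduped)
matches = [rid for rid, total in totals.items() if total > 1]

# stats count(receiver_id) AS match
print(len(matches))  # only r1 exists in both sourcetypes, so this prints 1
```

This is just a mental model of the search, not something you would run in production, but it makes clear why the coalesce step matters: without it, the two sourcetypes never share a common ID field to group on.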
Thank you, everyone, for your input!