Splunk Search

Having issues reporting information from inner searches in a nested query

nickcardenas
Path Finder

Hello all,

I will try to explain my issue as concisely as possible. I suspect the problem stems from my misunderstanding of how fields and return are used.

The use case is using a saved search to resolve an IP to a user who queried a specific domain.

Below is the trouble query:

| savedsearch IPresolver src=$clientIP$  
    [ search sourcetype=DNSlogs (some time range B)
        [ search sourcetype=intel (some time range C)
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return clientIP queried_domain _time] 
| table user queried_domain _time

where IPresolver looks like:

basesearch src=$src$ (some time range A)
|dedup user
|fields user

and my resulting table looks like:

user           queried_domain          _time
johndoe          <blank>               <timestamp from events from the saved search>

I'm having a hard time understanding why I'm able to return clientIP to populate the savedsearch command, but I'm unable to use queried_domain for reporting purposes in my outermost query. I'm also unsure how to report the correct _time, as it should be the timestamp from the DNS log events.

I should also note that both inner queries work as expected and this was validated through the following search:

sourcetype=DNSlogs (some time range B)
    [search sourcetype=intel (some time range C)
    |stats values(intelstuff) as queried_domain
    |format]
|table clientIP queried_domain _time

I'd greatly appreciate some insight into why this implementation doesn't work (I checked the job inspector and it did not contain any useful information).

Thank you!

0 Karma

dmarling
Builder

The problem is that when you use a subsearch to pass search criteria into another search, it doesn't pass the value as a new field; in this context it is applied as a filter. There are a couple of ways to pass in the actual field: one is to use another subsearch to populate the field through an eval, and the other is to join on that subsearch to append the data to your resulting event(s).
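
To make that concrete, a subsearch that ends in return or format is expanded into literal search terms before the outer search runs. For example (made-up domain value), an inner search like

    sourcetype=DNSlogs 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format]

effectively runs as something like

    sourcetype=DNSlogs ( ( queried_domain="domaina.com" ) )

so the DNS events are filtered by that value, but nothing from the subsearch is attached to them as a new field.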

Here's the subsearch method:

| savedsearch IPresolver src=$clientIP$ 
    [ search sourcetype=DNSlogs 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return clientIP queried_domain _time] 
| eval 
    [ search sourcetype=DNSlogs $clientIP$ 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return queried_domain] 
| eval 
    [ search sourcetype=DNSlogs $clientIP$ 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return _time] 
| table user queried_domain _time
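
A note on the two eval lines above: the subsearch following each eval expands into the literal text produced by return, so (with made-up values) the first one effectively becomes

    | eval queried_domain="domaina.com"

and the second one becomes

    | eval _time="1570000000"

which is how those values end up as actual fields on the outer results.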

This query assumes that the intel sourcetype contains the clientIP. If that's not the case, feel free to let me know and I can help you adjust it to use whatever field joins the events.

If you want to do it with a join, we just need to know what field is being used to join the DNSlogs to the intel logs.
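
If clientIP turns out to be that field, the join version would look roughly like this (a sketch only, and it also assumes IPresolver keeps clientIP in its output rather than dropping it with fields):

| savedsearch IPresolver src=$clientIP$ 
    [ search sourcetype=DNSlogs 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return clientIP queried_domain _time] 
| join clientIP 
    [ search sourcetype=DNSlogs $clientIP$ 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | fields clientIP queried_domain _time] 
| table user queried_domain _time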

Edited to also address the requirement that _time should come from the DNS events.

If this comment/answer was helpful, please up vote it. Thank you.
0 Karma

nickcardenas
Path Finder

Hello! Thank you for your help!

The intel sourcetype only generates a list of domains (clientIP is from the DNS logs). The domain names are then compared to the DNS logs using the field queried_domain, which explains the format command on line 5 and the renaming on line 4 of the trouble query in my original post. As a result, the DNS portion of my trouble query ends up doing something like this: queried_domain=domaina.com AND queried_domain=domainb.com, and so on. Matching events in the DNS logs contain clientIP.

I've also considered the eval method but I'm unsure how to actually implement that. I've seen people online do something like |eval something = [search ]

Thanks!

0 Karma

dmarling
Builder

The search I just updated in my answer does that, but instead of field=[subsearch] I'm having the subsearch generate the whole assignment with the return command. The drawback is that you have to pass that clientIP into your search three times. That shouldn't be a problem if you have this on a dashboard with tokens, but it will get annoying when you have to do it by hand each time. We could use map instead to get around that:

| makeresults count=1
| eval clientIP="255.255.255.255"
| map search="| savedsearch IPresolver src=$clientIP$ 
    [ search sourcetype=DNSlogs 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return clientIP queried_domain _time] 
| eval 
    [ search sourcetype=DNSlogs $clientIP$ 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return queried_domain] 
| eval 
    [ search sourcetype=DNSlogs $clientIP$ 
        [ search sourcetype=intel 
        | stats values(intelstuff) as queried_domain 
        | format] 
    | return _time] 
| table user queried_domain _time"
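
To show just the substitution on its own (made-up addresses and a deliberately trimmed-down inner search): map runs its search once per incoming result and replaces $clientIP$ with that result's value, so the IP only ever has to be typed in one place, and you can even run it for several addresses at once:

| makeresults count=1
| eval clientIP=split("10.0.0.1 10.0.0.2", " ")
| mvexpand clientIP
| map maxsearches=10 search="search sourcetype=DNSlogs clientIP=$clientIP$ | head 5"
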
If this comment/answer was helpful, please up vote it. Thank you.
0 Karma

solarboyz1
Builder

According to what you've written, your base search IPresolver ends with | fields user

This means fields like queried_domain are getting dropped, so they are not available to the table command.

I would adjust the last line of IPresolver to | fields _time, user, queried_domain
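
Applied to the saved search as you posted it, that would look something like:

basesearch src=$src$ (some time range A)
|dedup user
|fields _time, user, queried_domain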

0 Karma

nickcardenas
Path Finder

Hello! This makes sense; however, the change does not affect the output. Regardless, I appreciate your answer!

0 Karma