
Need workaround: loadjob does not work on ad hoc searches in a search head cluster

Tedesco1
Path Finder

We currently use a single search head with an indexer cluster. I have written a large number of relatively complex dashboards that use loadjob. I recently learned from the documentation page for the loadjob command that, on a search head cluster, loadjob cannot be run on ad hoc searches.

I have a few questions:

1) Do searches written as part of dashboard XML count as ad hoc searches? Will I need to rewrite or change these when we move to a search head cluster?

2) Are there any workarounds I can use? I am concerned about dashboard efficiency without loadjob, since in many cases I reuse the same search multiple times (in some dashboards the same results are used more than 5 times). I don't want to run the search over and over within the dashboard, and I don't want to schedule searches for every dashboard: much of the value to our business comes from how up to date our data is, so I would need a lot of very frequent scheduled searches. That does not seem ideal to me, although it is an option if I can keep each search relatively performant.
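
(For reference, I know Simple XML has base searches with post-process searches, which cover the reuse-across-panels case without re-running anything; a minimal sketch with placeholder index and field names is below. As I understand it, though, a post-process search cannot be fed into another search's subsearch the way loadjob can, which is the part I am worried about.)

<search id="base">
  <query>
    index=foo source=source1 field1=bar
    | fields field1 field2
  </query>
</search>

<row>
  <panel>
    <table>
      <!-- post-process search: reuses the base search's results instead of re-running the search -->
      <search base="base">
        <query>
          | stats values(field1) as field1 by field2
        </query>
      </search>
    </table>
  </panel>
</row>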

First, a generic example of how I am using the loadjob command; everything here would be in the context of a single dashboard. Below this example I have pasted a real dashboard that I have created.

<search>
  <query>
    index=foo source=source1 field1=bar
    | non-relevant SPL here
  </query>
  <done>
    <set token="search1">$job.sid$</set>
  </done>
</search>

<search>
  <query>
    index=foo source=source2 [| loadjob $search1$ | stats values(field1) as field1]
    | non-relevant SPL here
  </query>
  <done>
    <set token="search2">$job.sid$</set>
  </done>
</search>

<row>
  <panel>
    <table>
      <search>
        <query>
          index=foo2 source=source3
          | non-relevant SPL here
          | append
            [| loadjob $search1$
            | non-relevant SPL here]
          | eval field3=[| loadjob $search2$ | top 1 field3 | return $field3]
        </query>
      </search>
    </table>
  </panel>
</row>

Here is a real dashboard I have written, in case it is helpful, with a few things redacted as needed. (I went a little overboard with the redactions in some places, apologies, but the point of my question is the loadjob functionality, which is all there.)

<dashboard>
  <label>CL2 Queue Dashboard</label>
  <search>
    <query>
      index=def_mfg source=line_index production_area=--REDACTED-- earliest=-21d@d
      | sort 0 -_time
      | delta _time as timespan
      | search event="running"
      | eval timespan=-timespan
      | transaction maxpause= --REDACTED--
      | stats sum(timespan) as timespan values(source) as source by _time
      | where timespan>= --REDACTED--
      | sort 0 -_time
      | streamstats count as index_number
      | fields - timespan
    </query>
    <done>
      <condition>
        <set token="indexes_base">$job.sid$</set>
      </condition>
    </done>
    <refresh>300</refresh>
  </search>
  <search>
    <query>
      index=def_mfg source=punch (employee_kronos_group=--REDACTED-- OR employee_kronos_group=--REDACTED--) earliest=-28d@d
      | regex work_center="^(--)$"
      | sort 0 -_time trigger_timing
      | dedup record_trace_key
      | where trigger_timing="after"
      | table start source pwo work_center cwo
      | rename start as _time
      <!-- groups punches (by start time) with line indexes-->
      | append
        [| loadjob $indexes_base$]
      | append
        [| loadjob $indexes_base$
        | fields _time source
        | eval _time=_time - --REDACTED--,
          source="txn_start"]
      | append
        [| loadjob $indexes_base$
        | fields _time source
        | eval _time=_time - --REDACTED--,
          source="txn_end"
        | append
          [| makeresults
          | eval source="txn_end"]]
      | eval pwo_work_center=pwo.",:,".cwo.",:,".work_center
      | sort 0 -_time
      | transaction startswith=eval(source="txn_start") endswith=eval(source="txn_end")
      | fields - eventcount linecount joiner field_match_sum closed_txn timespan duration pwo work_center _raw source cwo
      <!--creates new events with line index number for each punch-->
      | mvexpand pwo_work_center
      | rex field=pwo_work_center "(?<pwo>.*),:,(?<cwo>.*),:,S(?<work_center>.*)"
      <!-- adjusts index number to match the index number of when that truck came off-line, then finds most common pwo punch for that lineslot-->
      | eval index_number=index_number - (--REDACTED-- - work_center)
      | append
        [| loadjob $indexes_base$
        | fields - source
        | rename _time as offline_time]
      | stats mode(pwo) as pwo mode(cwo) as cwo values(offline_time) as offline_time by index_number
      | where index_number>0 AND NOT (isnull(pwo) OR isnull(cwo) OR pwo="" OR cwo="")
    </query>
    <done>
      <condition>
        <set token="index_base">$job.sid$</set>
      </condition>
    </done>
  </search>
  <search>
    <query>
      index=def_mfg source=work_order (tag=--REDACTED-- OR tag=--REDACTED--) earliest=-50d@d
        [| loadjob $index_base$
        | stats values(pwo) as pwo]
      | table _time cwo pwo status part_number description tag start
      | sort 0 -_time
      | dedup cwo
      | stats values(eval(if(tag="--REDACTED--",status,null()))) as capsule_status values(eval(if(tag="--REDACTED--",status,null()))) as main_line_status values(eval(if(tag="--REDACTED--",start,null()))) as start by pwo
    </query>
    <done>
      <condition>
        <set token="status_base">$job.sid$</set>
      </condition>
    </done>
  </search>
  <search>
    <query>
      index=def_mfg source=punch (--REDACTED FILTER HERE--) earliest=-21d@d
        [| loadjob $index_base$
        | stats values(pwo) as pwo]
      | regex work_center="^(--REDACTED--)$"
      | sort 0 -_time trigger_timing
      | dedup record_trace_key
      | where trigger_timing="after"
      | stats avg(start) as _time by pwo work_center
      | sort 0 -_time
      | dedup pwo
    </query>
    <done>
      <condition>
        <set token="punch_base">$job.sid$</set>
      </condition>
    </done>
  </search>
  <search>
    <query>
      index=def_mfg (source=nonconformance OR source=incomplete_task OR source=part_not_installed) earliest=-35d@d
        ([| loadjob $index_base$
        | stats values(pwo) as pwo])
      | regex open_work_center="^(--REDACTED--)$"
      | sort 0 -_time
      | dedup issue_id
      | search status="OPEN"
      | stats count(eval(source="nonconformance")) as nonconformances count(eval(source="incomplete_task")) as incomplete_tasks count(eval(source="part_not_installed" AND reason="SHORT")) as shorts list(eval(if(source="part_not_installed" AND reason="SHORT",part_number,null()))) as shorts_list count(eval(source="part_not_installed" AND NOT reason="SHORT")) as parts_not_installed by pwo
    </query>
    <done>
      <condition>
        <set token="defect_base">$job.sid$</set>
      </condition>
    </done>
  </search>
  <search>
    <query>
      | loadjob $index_base$
      | append
        [| loadjob $status_base$]
      | append
        [| loadjob $punch_base$]
      | append
        [| loadjob $defect_base$]
      | stats values(start) as ml_start values(capsule_status) as capsule_status values(offline_time) as capsule_offline values(main_line_status) as main_line_status values(work_center) as last_punch_work_center values(_time) as last_punch_time values(nonconformances) as nonconformances values(incomplete_tasks) as incomplete_tasks values(shorts) as shorts values(parts_not_installed) as parts_not_installed list(shorts_list) as shorts_list by pwo
      | where NOT (main_line_status >--REDACTED-- OR last_punch_time<relative_time(now(),"-24h"))
      | eval last_punch_time=strftime(last_punch_time,"%c"),
        ml_start=strftime(ml_start+--REDACTED--,"%m/%d/%y"),
        capsule_offline=strftime(capsule_offline, "%c"),
        _work_center=substr(last_punch_work_center,2)
      | lookup truck pwo output vehicle_serial
      | table vehicle_serial pwo ml_start capsule_status capsule_offline main_line_status last_punch_work_center last_punch_time nonconformances incomplete_tasks shorts parts_not_installed shorts_list _work_center
      | sort 0 -last_punch_work_center last_punch_time
    </query>
    <done>
      <condition>
        <set token="panel_base">$job.sid$</set>
      </condition>
    </done>
  </search>
  <row>
    <panel>
      <title>Completed --REDACTED-- in Queue for --REDACTED--</title>
      <single>
        <search>
          <query>
            | loadjob $panel_base$
            | search (NOT last_punch_work_center=*) OR last_punch_work_center="" OR _work_center<--REDACTED-- OR (NOT main_line_status>=--REDACTED--)
            | stats count
          </query>
        </search>
      </single>
    </panel>
    <panel>
      <title>Open DPU on Queue --REDACTED--</title>
      <single>
        <search>
          <query>
            | loadjob $panel_base$
            | search (NOT last_punch_work_center=*) OR last_punch_work_center="" OR _work_center<--REDACTED-- OR (NOT main_line_status>=--REDACTED--)
            | stats sum(nonconformances) as nonconformances sum(incomplete_tasks) as incomplete_tasks count as capsules
            | eval dpu=(nonconformances+incomplete_tasks)/capsules
            | fields dpu
          </query>
        </search>
        <option name="numberPrecision">0.00</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
      </single>
    </panel>
    <panel>
      <title>--REDACTED-- Dropped on --REDACTED-- - Currently on --REDACTED--</title>
      <single>
        <search>
          <query>
            | loadjob $panel_base$
            | search _work_center>=--REDACTED--
            | stats count
          </query>
        </search>
      </single>
    </panel>
    <panel>
      <title>Open --REDACTED--</title>
      <single>
        <search>
          <query>
            | loadjob $panel_base$
            | search _work_center>=--REDACTED--
            | stats sum(nonconformances) as nonconformances sum(incomplete_tasks) as incomplete_tasks count as capsules
            | eval dpu=(nonconformances+incomplete_tasks)/capsules
            | fields dpu
          </query>
        </search>
        <option name="numberPrecision">0.00</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>
            | loadjob $panel_base$
            | rename ml_start as "--REDACTED-- Scheduled Start" capsule_status as "Capsule Status" capsule_offline as "Capsule Offline Time" main_line_status as "Main Line Status" last_punch_work_center as "Last --REDACTED-- Punch" last_punch_time as "Last Punch Time" nonconformances as Nonconformances incomplete_tasks as "Incomplete Tasks" shorts as Shorts parts_not_installed as "Non-Short Missing Parts" shorts_list as "List of Shorts" pwo as PWO vehicle_serial as Serial
          </query>
        </search>
        <option name="count">100</option>
      </table>
    </panel>
  </row>
</dashboard>
1 Solution

cmerriman
Super Champion

You shouldn't have a problem using this dashboard on a SHC. I just created a dashboard on a cluster with the same logic you listed above, and it worked just fine. You're creating tokens with the SID and passing them to loadjob. I think the documentation might need some clarification, though I could be wrong. I believe ad hoc search artifacts are not replicated across the cluster, so you cannot grab their results via SID with loadjob from other search heads. However, you're doing it all in one dashboard, so everything runs on the same SH. I think it should be fine.
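
If you ever do need results that survive across cluster members, I believe the documented route is to point loadjob at a scheduled saved search instead of a SID, since scheduled-search artifacts are replicated across the SHC; a minimal sketch (user, app, and search name are placeholders):

| loadjob savedsearch="someuser:search:my_scheduled_search"
| stats values(field1) as field1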


Tedesco1
Path Finder

Thank you very much for the response. That makes sense to me, and I had a hunch that might be the case, but I needed to be sure. I will open a support ticket and ask for clarification on the documentation.


cmerriman
Super Champion

At the bottom of the docs page you should just be able to leave feedback. The docs team is extremely responsive; they'll be able to fix or clarify the docs and look into the issue to make sure the docs are accurate.

Tedesco1
Path Finder

I did not know I could do that, thank you! I've never scrolled that far down on any of the docs pages.
