Splunk Search

Why is loadjob returning null?

dezmadi
Path Finder

I have the query below as a base search, and it is returning null:

 

<search id="dfLatencyOverallProcessingDelayBaseSearch">
<query>index="deng03-cis-dev-audit"
| eval serviceName = mvindex(split(index, "-"), 1)."-".mvindex(split(host, "-"), 2)
| search "data.labels.activity_type_name"="ViolationOpenEventv1"
| spath PATH=data.labels.verbose_message output=verbose_message
| where verbose_message like "%overall_processing_delay%Dataflow Job labels%"
| eval error=case(like(verbose_message,"%is above the threshold of 60.000%"), "warning", like(verbose_message,"%is above the threshold of 300.000%"), "failure")</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
<done>
<condition>
<set token="dfLatencyOverallProcessingDelay_sid">$job.sid$</set>
</condition>
</done>
</search>
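
For context, the done handler above saves the finished job's SID into the dfLatencyOverallProcessingDelay_sid token so other searches can reuse the results with loadjob. A minimal sketch of such a consumer panel (the panel, title, and stats here are illustrative only, not my actual dashboard):

<panel>
  <table>
    <title>Overall processing delay (reused results)</title>
    <search>
      <query>| loadjob $dfLatencyOverallProcessingDelay_sid$ | stats count by error</query>
    </search>
  </table>
</panel>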

Then I append the saved job's results:

SomeQuery
| append
    [ loadjob $dfLatencyOverallProcessingDelay_sid$
      | eval alertName = "Dataflow-Latency-Overall processing high delay"
      | stats values(alertName) as AlertName values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount ]

If the results from dfLatencyOverallProcessingDelay_sid are null, then AlertName also comes back blank. I want it to still be "Dataflow-Latency-Overall processing high delay".
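
As a quick check, something like this (a diagnostic sketch run from a scratch panel in the same dashboard, so the token resolves; jobEventCount is just an alias I made up here) shows whether the saved job has any events at all:

| loadjob $dfLatencyOverallProcessingDelay_sid$
| stats count as jobEventCount count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount

If jobEventCount is 0, the appended stats has nothing to take values(alertName) from, which is when AlertName comes back blank.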

1 Solution

ITWhisperer
SplunkTrust

Try something like this

append
    [ loadjob $dfLatencyOverallProcessingDelay_sid$
      | eval alertName = "Dataflow-Latency-Overall processing high delay"
      | stats values(alertName) as AlertName values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount
      | appendpipe
          [ stats count as nullcount
            | where nullcount = 0
            | eval AlertName = "Dataflow-Latency-Overall processing high delay" ] ]
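
appendpipe runs its subpipeline over whatever rows the search has produced so far. When nothing reaches it, stats count as nullcount yields a single row with 0, the where keeps that row, and the eval stamps the default AlertName onto it; when there are rows, nullcount is non-zero and nothing extra is appended. A self-contained sketch of that idiom (makeresults and where 1==0 just simulate an empty result set):

| makeresults count=3
| where 1==0 ``` simulate a stage that produced no rows ```
| appendpipe
    [ stats count as nullcount
      | where nullcount == 0
      | eval AlertName = "Dataflow-Latency-Overall processing high delay"
      | fields - nullcount ]

This returns a single row with AlertName set even though the pipeline above the appendpipe was empty.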


