Splunk Search

Why does the lack of subsearch results cause my search to fail with "Error in 'eval' command: Failed to parse the provided arguments."?

Lack of subsearch results causing query to error

I have a search that looks at historical data (using timewrap) and then compares it to the current day's data:

index=os* result=failed
| timechart count span=15m
| timewrap 1day
| tail 1
| fields 28days_before 21days_before 14days_before 7days_before
| transpose column_name=day
| rename "row 1" AS count
| head 4
| stats avg(count) as average stdev(count) as standard_deviation max(count) as hist_max
| eval today_fails=[ 
search index=os* result=failed earliest=-30m
| timechart span=15m count
| tail 1 
| return $count
]
| eval window_high=(average + standard_deviation)
| where today_fails > window_high

I'll break down the search now:

  1. Looks for authentication failure events
  2. Puts the data into a timechart using 15 minute intervals
  3. Uses timewrap to compare data day-to-day
  4. Removes all but latest 15 minutes of data
  5. Keeps only historical data from the same day of the week, since we want to compare today's data against the same weekday (Monday to Monday, Tuesday to Tuesday, etc.). This filters out the unnecessary data.
  6. Flips the table in anticipation of performing some calculations on the data
  7. Renames the field
  8. Filters out more unnecessary data from the transposed table
  9. Calculates the average, standard deviation, and historical maximum
  10. Creates a subsearch to pull in today's data
  11. Looks for authentication failure events over the past 30 minutes
  12. Puts the data into a timechart using 15 minute intervals
  13. Strips out all but the latest 15 minutes
  14. Returns the number of events found in the subsearch
  15. Close subsearch
  16. Sets the maximum threshold for alerting purposes
  17. Compares today's data against the maximum threshold, returning results only if today's count is greater.
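Steps 16 and 17 boil down to a one-standard-deviation threshold test. A minimal Python sketch of that math (function names and sample values are mine, not part of the search):

```python
from statistics import mean, stdev

def window_high(historical_counts):
    # Mirrors: | eval window_high=(average + standard_deviation)
    return mean(historical_counts) + stdev(historical_counts)

def should_alert(today_fails, historical_counts):
    # Mirrors: | where today_fails > window_high
    return today_fails > window_high(historical_counts)
```

For example, with historical counts of 10, 12, 8, and 10 failures, the threshold comes out around 11.6, so 30 failures today would alert while 11 would not.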

The search pulls the last 30 days of events and puts them in a timechart. It then uses the timewrap command to compare data day-to-day. Since we only want to look at historical data from the same day of the week, it filters out everything else.

Everything is fine so long as the subsearch can return actual events. However, there are plenty of instances where no authentication failures occurred in a given window. When that happens, I get the following error upon running the search:

Error in 'eval' command: Failed to parse the provided arguments. Usage: eval dest_key = expression
The search job has failed due to an error. You may be able to view the job in the Job Inspector.

My question is: how can I adjust my subsearch so that it always returns a value? Either 0 or, if events were found, whatever that number is.

1 Solution

SplunkTrust

Use this as your subsearch. It will add a row with count=0 if there are no results in the pipeline before the appendpipe:

search index=os* result=failed earliest=-30m
 | timechart span=15m count
 | tail 1 | appendpipe [| stats count | where count==0]
 | return $count
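Why this works: appendpipe runs its sub-pipeline over the current result set and appends the output. `stats count` always emits exactly one row, so when the pipeline is empty it produces count=0 and the `where` clause lets that row through; when a row is already present, the count is nonzero and nothing is appended. A Python model of that behavior (rows as dicts; my naming, not Splunk's):

```python
def appendpipe_default(rows):
    # Sub-pipeline "stats count": one row whose count is the
    # number of incoming rows.
    stats_out = [{"count": len(rows)}]
    # "where count==0": keep that row only when the pipeline was empty.
    stats_out = [r for r in stats_out if r["count"] == 0]
    # appendpipe: append the sub-pipeline's output to the original rows.
    return rows + stats_out
```

An empty input yields a single count=0 row, so the trailing `return $count` always has a value; a non-empty input passes through untouched.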



SplunkTrust

I am not sure why you are using timechart to bucket the data when you only need the latest 15-minute window. You can use stats instead. If you use stats count, a count of 0 should be returned as long as there are events in the 15-minute window (not necessarily failed ones).

index=os* result=failed
| timechart count span=15m
| timewrap 1day
| tail 1
| fields 28days_before 21days_before 14days_before 7days_before
| transpose column_name=day
| rename "row 1" AS count
| head 4
| stats avg(count) as average stdev(count) as standard_deviation max(count) as hist_max
| eval today_fails=[ 
                     search index=os* result=failed earliest=-15m latest=now
                     | stats count 
                     | return $count
                   ]
| eval window_high=(average + standard_deviation)
| where today_fails > window_high

Another option is to break the query into two parts and use a Search Event Handler to pass the count on to the main search. (Note: the done/progress search event handlers require Splunk 6.5 or higher; on older versions you would need to use finalized/preview, respectively.)

Run the following search in your dashboard's Simple XML (refer to the Null Search Swapper example in the Splunk 6.x Dashboard Examples app):

<search>
   <query>search index=os* result=failed earliest=-15m latest=now
| stats count
   </query>
   <done>
      <condition match="$job.resultCount$==0">
         <set token="todayFails">0</set>
      </condition>
      <condition>
         <set token="todayFails">$result.count$</set>
      </condition>
   </done>
</search>
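The two condition elements amount to a simple fallback. A Python sketch of the token logic (argument names mirror the $job.resultCount$ and $result.count$ tokens; this is a model of the handler, not Splunk code):

```python
def resolve_today_fails(job_result_count, result_count):
    # First <condition> matches when the search returned no rows:
    # set the todayFails token to 0.
    if job_result_count == 0:
        return 0
    # The bare <condition> is the catch-all: use $result.count$.
    return result_count
```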

Then use the token $todayFails$ in your main query

index=os* result=failed
| timechart count span=15m
| timewrap 1day
| tail 1
| fields 28days_before 21days_before 14days_before 7days_before
| transpose column_name=day
| rename "row 1" AS count
| head 4
| stats avg(count) as average stdev(count) as standard_deviation max(count) as hist_max
| eval today_fails=$todayFails$
| eval window_high=(average + standard_deviation)
| where today_fails > window_high
