How to limit the size of search jobs in the jobs manager ?

New Member

Hi All,

I have the following search command in my scheduled report:

index=linux_log mountedon | dedup host, mountedon  | fields host, mountedon | map search="search index=linux_log mountedon=$$mountedon$$ host=$$host$$ earliest=-6m@m | timechart avg(use) as use, values(host) as host, values(mountedon) as mountedon span=60m | predict use as predict future_timespan="4000" | stats latest(predict) as predict, values(host) as host, values(mountedon) as mountedon, latest(_time) as predicttime  |  table host, mountedon, predicttime, predict" maxsearches=100000000000000000000000000000 | eval c_predicttime=strftime(predicttime,"%d-%m-%y %H:%M")  | table host, mountedon, predict, c_predicttime  | sort - predict | where predict > 95 | count

This job takes a long time to finish, and when I look in the Job Manager I see that this job has taken more than 30 MB for a simple count. Is there a way to limit this?


Super Champion

I see that in the map command you're using earliest=-6m@m, but in the timechart you're using span=60m. I'm curious whether one of those is a typo, because you wouldn't need a 60-minute span in a timechart if you're only looking 6 minutes back. Also, at the end of your map command you don't really need the table command, because those fields are all the fields in your stats command and will automatically generate a table. And do you really need maxsearches=100000000000000000000000000000? A huge maxsearches value lets map spawn one subsearch per input row without any sane cap, which is a big part of why the job grows so large.
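For example, the map subsearch could be trimmed to something like this (a sketch only: the -60m@m window and maxsearches=1000 are assumptions you'd tune to your data, and I've dropped the trailing table since stats already produces one):

    | map maxsearches=1000 search="search index=linux_log mountedon=$$mountedon$$ host=$$host$$ earliest=-60m@m
        | timechart avg(use) as use span=60m
        | predict use as predict future_timespan=4000
        | stats latest(predict) as predict, latest(_time) as predicttime
        | eval host=\"$$host$$\", mountedon=\"$$mountedon$$\""

Carrying host and mountedon through with eval (instead of values() inside stats) keeps each subsearch's result set to a single small row.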

You could also flip your where and sort statements so that you aren't sorting so many rows: filtering first means sort only has to handle the rows that survive the where clause.
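In other words, swap the order of the last two pipeline stages so the filter runs before the sort:

    ... | where predict > 95 | sort - predict

This produces the same final rows as sorting first, but sort now only touches the (presumably few) rows with predict above 95.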

Last, but not least, | count on its own does not do anything; you need | stats count instead if you're trying to count how many rows were greater than 95, in which case you really didn't need the sort at all.
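So the tail of your search could collapse to just (a sketch, assuming all you want back is the count of hosts/mounts predicted above 95):

    ... | where predict > 95 | stats count

With only a single number as the final output, the sort, the eval on predicttime, and the final table all become unnecessary, which should also shrink the job's footprint.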
