
fields in subsearch not showing all results?

stwong
Communicator

Hi all,

I tried to find the log entries of the same mail using the queue id (qid) from the sendmail log. However, for the same time span, the following two searches give different results, e.g.

This gives all records across the whole time span:
source="/tmp/sendmail.txt" from="<userA@my.domain.hk>" | fields qid | reverse

This only returns part of the records; those in earlier time slots are missing:
source="/tmp/sendmail.txt" [search source="/tmp/sendmail.txt" from="<userA@my.domain.hk>" | fields qid ]| reverse

Would anyone please help?

Thanks and rgds,
/ST Wong

1 Solution

gcato
Contributor

Hi stwong,

If you run the following search, is the result greater than 10,000?

source="/tmp/sendmail.txt" from="<userA@my.domain.hk>" |stats count

If so, you are hitting the default subsearch limit of 10,000 results.

http://docs.splunk.com/Documentation/Splunk/6.3.0/Search/Aboutsubsearches
"..., by default subsearches return a maximum of 10,000 results and have a maximum runtime of 60 seconds."

This can be increased in the limits.conf file.
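
For example, a minimal sketch, assuming Splunk Enterprise and filesystem access to the search head; the stanza goes in $SPLUNK_HOME/etc/system/local/limits.conf (the values here are illustrative, not recommendations):

[subsearch]
# cap on the number of results a subsearch may return (default 10000)
maxout = 10500
# cap on subsearch runtime in seconds (default 60)
maxtime = 120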

Hope this helps.


stwong
Communicator

Thanks to all of you. I'm busy with other tasks and will try it out next week. Thanks again.

0 Karma

HeinzWaescher
Motivator

You could try out

source="/tmp/sendmail.txt" [search source="/tmp/sendmail.txt" from="" | fields qid | format ]| reverse

as a workaround to escape the limit.
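
For reference, format renders the subsearch's results into a literal boolean expression that is substituted into the outer search; with two hypothetical qid values it would expand to something like:

( ( qid="rB4J7xq1003456" ) OR ( qid="rB4J8yr2003457" ) )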

0 Karma

woodcock
Esteemed Legend

As others have mentioned, you are hitting the subsearch limit. You can use the approach listed in this Q&A to escape the limit but you will have to split your work across several searches each of which writes to disk and then assemble all the results together at the end.

https://answers.splunk.com/answers/318428/how-can-i-escape-the-50k-subsearch-limit-while-lin.html
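
A rough sketch of that approach, assuming you split the work by time range and using hypothetical CSV file names; run the first two searches separately, then combine the files in a final search:

source="/tmp/sendmail.txt" from="<userA@my.domain.hk>" earliest=-14d latest=-7d | fields qid | outputcsv qids_part1
source="/tmp/sendmail.txt" from="<userA@my.domain.hk>" earliest=-7d latest=now | fields qid | outputcsv qids_part2
| inputcsv qids_part1 | inputcsv append=true qids_part2 | dedup qid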

0 Karma

stwong
Communicator

Hello, thanks for your advice. I'm a newbie to Splunk and wasn't aware of appendpipe. From your post and the documentation, it seems I can break up the searches like the following. Did I interpret it correctly?

...| appendpipe [search part 1] | appendpipe [search part 2] | ... | appendpipe [search part N] | stats ...

Will try it out.
Anyway, this method assumes the input is already broken up into parts, while a mail transaction is composed of multiple log entries. Breaking the log files into parts (e.g. by number of lines, date/time, etc.) may make the entries of one transaction span multiple files, so I couldn't get the complete transaction information, especially on a busy mail server. I have to think about a good way to split the file to avoid this ...

Thanks a lot.

0 Karma

woodcock
Esteemed Legend

You cannot use the search directly; you would have to run each search separately, finish each with outputcsv somefile, and then use the solution mentioned to pull all the data back in from the files.

0 Karma

stwong
Communicator

Hi all,

Besides maxout, I also increased maxopentxn and maxopenevents.
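
For anyone following along, those two settings live in the [transactions] stanza of limits.conf; a minimal sketch, with illustrative values (not recommendations):

[transactions]
# maximum number of transactions that can be open at one time
maxopentxn = 20000
# maximum number of events held in memory across open transactions
maxopenevents = 200000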

Anyway, I got the following when I set maxout to 100000:

[subsearch]: Subsearch produced 50000 results, truncating to maxout 50000.

Is this a hard limit?

Thanks and regards

0 Karma

sideview
SplunkTrust

I'm not sure if this is a more recent change, but the limits.conf docs say that the maxout value for [subsearch] cannot be set higher than 10500. So apparently you can set it lower than its default of 10,000, but only slightly higher.

http://docs.splunk.com/Documentation/Splunk/6.3.0/Admin/Limitsconf

So in the end the answer will be to get the same results without a subsearch (see my answer for an alternate method).

0 Karma


stwong
Communicator

Thanks for your help. It's around 4x,xxx. Besides subsearch, it seems there is also a similar limit for transaction?

Thanks again.

0 Karma

woodcock
Esteemed Legend

The limit for transaction is worse because it gives no warning. Whenever it "runs out of memory", it just returns the results that it has at that moment, without any error or other indicator that the output was truncated.

0 Karma

sideview
SplunkTrust

Are there over 10,500 events returned by the first search?

If so, then your problem is a limit inherent in subsearches: they will only return that many results. So the first search will get all of the events from that source and that user, but the second search will only get the events whose qid values are found in the most recent 10,000 events from the same source and user.

UPDATE: I see why you moved on to trying the transaction command, although as discussed that command has somewhat similar limitations. Suppose you want to get the set of events defined as: all messages (regardless of their value of from) that contain any of the qid values that have appeared in the events that have from="<userA@my.domain.hk>".

There are ways to do this without being subject to any memory limits; an approach using the streamstats command might work better:

source="/tmp/sendmail.txt" from="" | streamstats values(user) as users by qid | where user=="<userA@my.domain.hk>"

In some cases where the cardinality of the categorical fields is very large, streamstats can itself run into some strange behavior that is probably related to high memory usage. To date, however, I've only seen that behavior when using streamstats' more arcane arguments, window and global.

stwong
Communicator

Hello, right, there are over 10,500 events returned by the first search. I increased maxout and the result looks better. It seems there is also a limit for transaction?

Thanks a lot for your help.

0 Karma

sideview
SplunkTrust

If you're hitting maxout limits, beware of simply increasing maxout; I would not necessarily do it here. Maxout is there for a reason. Splunk makes it look easy to manipulate massive data sets, but some search language still forces Splunk to hold large amounts of data in memory at one time on one instance, and this is one of those times. Ultimately, increasing maxout is a slippery slope, and using suboptimal search language in cases where other search language is more efficient may lead you to (a) poor performance or even (b) quite genuine out-of-memory problems.

sideview
SplunkTrust

Addendum: it seems the subsearch maxout value cannot be increased past a hard limit of 10500. http://docs.splunk.com/Documentation/Splunk/6.3.0/Admin/Limitsconf

0 Karma