Hello.
We have a user who wants to receive some rather large reports daily. In one particular case, the search returns 208,000 results and that's definitely what the user wants (don't ask).
If I run the search from Splunk Web, I get 208K results as expected. When we initially sent the email, either through a scheduled search or using "sendemail" in the search pipeline, the output was limited to 10K results, which is what I'd expect with an unmodified Splunk configuration.
In the savedsearches.conf where this search lives, I added
action.email.maxresults = 500000
figuring that I'd leave some headroom in case this report grows. Interestingly, the output then went from 10K results to 50K results. 50K, not 500K -- a tenth of the limit I configured and still well short of the full result set. I bumped this number up another time or two and it made no difference.
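For reference, the stanza ends up looking roughly like this (the stanza name below is a placeholder for the real saved search, and the schedule/search lines are omitted):
# savedsearches.conf -- illustrative stanza, name and recipient are placeholders
[My Large Daily Report]
action.email = 1
action.email.to = user@example.com
action.email.sendcsv = 1
action.email.maxresults = 500000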
My understanding was that changing that specific scheduled search's stanza should be enough for this to take effect, and clearly it did have an effect, just not enough. I thought modifying alert_actions.conf was needed only to set defaults for saved searches. But just in case, I added
[email]
maxresults = 500000
in $SPLUNK_HOME/etc/system/local/alert_actions.conf. After seeing no change, I also tried adding
command = $action.email.preprocess_results{default=""}$ | sendemail "results_link=$results.url$" "ssname=$name$" "graceful=$graceful{default=True}$" "trigger_time=$trigger_time$" maxinputs="$action.email.maxresults{default=50000}$" maxtime="$action.email.maxtime{default=5m}$" results_file="$results.file$"
to alert_actions.conf as I saw suggested in another Splunk Answers question. Still only 50K results.
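For anyone trying to reproduce this, one way to see which copy of these settings Splunk actually resolves is btool with --debug, which prints the file each value comes from. Something like this on the search head (the saved search name is a placeholder):
$SPLUNK_HOME/bin/splunk btool alert_actions list email --debug | grep maxresults
$SPLUNK_HOME/bin/splunk btool savedsearches list "My Large Daily Report" --debug | grep action.email.maxresults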
I don't know exactly how large this file would be, but it can't be the email system rejecting the mail because of its size; if that were the case, the message would not be delivered at all. The only other way email could be involved is if our mail system actively opened the CSV attachment and removed some lines, which doesn't seem likely.
Has anyone seen something like this and hopefully knows a remedy?
Splunk 6.5.3 in this case.
Thanks!
After playing around with this I was able to get past the 10K/50K result limits. This required all three of the following settings on the search head.
$SPLUNK_HOME/etc/system/local/limits.conf
[scheduler]
max_action_results = 175000
[searchresults]
maxresultrows = 175000
$SPLUNK_HOME/etc/system/local/alert_actions.conf
[default]
maxresults = 175000
This enables an email alert containing a .csv attachment to have 175K rows.
Note: When I pushed the same configs from the deployer, they ended up in an app's default directory as expected, but my .csv was still limited to 10K rows. When I put them directly into $SPLUNK_HOME/etc/system/local via the CLI on each member, I got 175K rows in the CSV.
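If anyone wants to confirm which copy of these limits actually wins after a deployer push, btool with --debug shows the file each value comes from, e.g. on each member (paths assume a default install; this is just a sanity check, not part of the fix):
$SPLUNK_HOME/bin/splunk btool limits list scheduler --debug
$SPLUNK_HOME/bin/splunk btool limits list searchresults --debug
$SPLUNK_HOME/bin/splunk btool alert_actions list default --debug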
In python.log we are hitting some local size limit imposed by the email server:
index=_internal source=*python.log* "Message exceeds local size limit"
The 552 error was generated by the receiving SMTP email server complaining that the email hit the size limit on the server. Every mail server has custom settings for the size of mail users can send and receive through it. These limits can either be global or individual-account specific or both.
On an Exchange server, there are four main settings for message size limits:
a. Global value for the mail server
b. Receive and Send Connectors setting
c. SMTP Virtual Server setting
d. Individual user mailbox restriction
If the message size exceeds any of the limits that apply to a particular email account, the message will be rejected with this 552 error.
To resolve a valid email bouncing with error 552, these attachment limits have to be increased or adjusted appropriately.
You could contact the mail system admin to better understand limits for email attachments.
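As a concrete example, if the receiving relay happens to be Postfix (as it sounds like it is here), the relevant parameter is message_size_limit, which defaults to roughly 10 MB. A rough sketch of checking and raising it:
# show the current limit in bytes (Postfix default is 10240000, about 10 MB)
postconf message_size_limit
# raise it to about 100 MB, then reload Postfix to apply the change
postconf -e message_size_limit=104857600
postfix reload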
The three limits.conf/alert_actions.conf settings above did not raise the limit in our environment (Splunk 8.2.2).
It appears that there may be a size limit in MB in addition to the row-count limit.
Hi all,
I have no access to the .conf files since I don't have admin access, so I'm trying to do this through the advanced edit UI. I have changed every maxresults value from 50000 to 500000 but still don't get more than 50,000 results. What should I do now?
I figured it was something along these lines. However, I tried these settings (bumped them up enough to handle 250,000 results) and I still get the same message -- at least when I run the search interactively and pipe it through 'sendemail' instead of scheduling it. Perhaps that involves some other magical setting in limits.conf.
I believe this job will run on its own tonight so we'll see what happens via the scheduler tonight.
Thanks
@mfrost8 are you all good ?
The major issue here (size of emailed scheduled search jobs) is resolved due to your limits.conf suggestions. I consider my question answered.
But as I noted in another comment, I would like to figure out what needs to be fixed to make the 'sendemail' command work from within the search bar as well. That's on me, I think.
My concern is that the descriptions of items in limits.conf don't lead to an obvious solution here. There are many keys with 'max' in them, but none of them were obviously about this issue and at this point, I don't quite know which would apply to the sendemail command. I need a decoder ring.
Thanks
OK, now click Accept; all done here, right?
should be if this works for @mfrost8
Um, the command= code you posted only asks for 50K results: {default=50000}
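If you do want to keep that command override, the token default should probably match the value you're after; a sketch with just that one number changed (everything else is the stock line from alert_actions.conf):
command = $action.email.preprocess_results{default=""}$ | sendemail "results_link=$results.url$" "ssname=$name$" "graceful=$graceful{default=True}$" "trigger_time=$trigger_time$" maxinputs="$action.email.maxresults{default=500000}$" maxtime="$action.email.maxtime{default=5m}$" results_file="$results.file$"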
Yeah, I see that now that you mention it. But as I said, I had already tried this without that command line (just maxresults = 500000) and it didn't matter. And ultimately, I believe that what's in the saved search definition should override all of that.
You can try using sendemail in SPL to send the raw results in the body of the email (instead of CSV).
Thanks. I'd tried that previously as well. I thought I'd mentioned that above, but I just tried it again:
[ search ]
| sendemail to="me@domain.com" from="me@domain.com" inline=false maxinputs=1000000 sendcsv=true server=mysmtpserver subject="My Test" sendresults=true format=csv
As usual, I received the email with an attached CSV file. Still 50,000 results. The body of the email says:
Search results attached.
Attached csv results have been truncated
Dangit.
No, I am saying in the body (which, now that I think about it, is an option in the Alert too, but it may be truncated there), like this:
[ search ]
| sendemail to="me@domain.com" from="me@domain.com" inline=true maxinputs=1000000 sendcsv=true server=mysmtpserver subject="My Test" sendresults=true format=csv
Understood. I tried this and got essentially the same result: only 50K results inline and a message at the top of the email saying the results were truncated. That is, Splunk clearly knows there are more events than it's showing, but it only displays 50K of them for some reason.
It's not out of the realm of possibility that I'm just doing something wrong here that I'm missing or have a typo or something, but I'm not seeing it.
Thanks
This is interesting. For the sendemail commands I'd run up to this point, I never got any error output. But using 'sendemail' and telling it not to use CSV, I got
command="sendemail", (552, 'size limit exceeded', u'me@mydomain.com') while sending mail to: me@mydomain.com
which I'd never seen before. I saw another Splunk Answers post suggesting I check the mail system (Postfix in this case) configuration for maximum message size. I doubled that value but it had no effect. It doesn't seem like that would be applicable here anyway.
From some other digging into that error message, at least the 552 part of it, it does seem like something needs to be changed in limits.conf, although the docs don't make it at all clear what.