I'm in the process of building some high-priority dashboards for my management (time critical), and I'm having a problem when I schedule the PDF for delivery. One of my tables has 1370 rows, but the PDF version stops at 1000 rows.
Following guides here: https://docs.splunk.com/Documentation/Splunk/6.6.11/Viz/DashboardPDFs#Additional_configurations_for_...
I discovered the defaults in limits.conf are:
[pdf]
max_mem_usage_mb = 200
max_rows_per_table = 1000
render_endpoint_timeout = 3600
I've changed them to the following, by pushing an app and restarting:
[pdf]
max_mem_usage_mb = 300
max_rows_per_table = 2000
render_endpoint_timeout = 3600
The table still stops at 1000 rows in the PDF.
Is this limitation not surpassable?
Any help is greatly appreciated. Thank you.
@adamsmith47 I ran the following search in a dashboard over Yesterday, with data spanning every minute, to generate 1440 rows, and tested Scheduled PDF Delivery:
index=_internal sourcetype=splunkd
| timechart span=1m count by component
With the default limits I saw only 1000 rows in the test PDF. After creating the following limits.conf in system/local, I was able to see all 1440 rows in the test-mode Scheduled PDF Delivery:
[pdf]
max_mem_usage_mb = 2048
max_rows_per_table = 2000
render_endpoint_timeout = 4200
So you need to check the following:
1) Use btool to confirm that your app's limits.conf setting for max_rows_per_table is actually being picked up. If someone has already placed a limits.conf in system/local, that copy will win.
2) Otherwise, either the max memory usage limit or the render timeout is being hit.
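For step 1, a minimal sketch of the btool check (this assumes a standard install with $SPLUNK_HOME pointing at the Splunk root, commonly /opt/splunk; it only runs on a host with Splunk installed):

```shell
# Ask btool which file each [pdf] setting is coming from.
# --debug prefixes every output line with the source .conf path,
# so you can see whether system/local or your app's copy wins.
$SPLUNK_HOME/bin/splunk btool limits list pdf --debug
```

If the line for max_rows_per_table is attributed to system/local/limits.conf rather than to your app's directory, the system/local copy is overriding your app's setting.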
AH HA! Success!
Once I changed the limits.conf in system/local it worked. That means Splunk is not properly reading the limits.conf [pdf] stanza in etc/apps, even though it shows up in btool. Strange.
BUG ALERT!
@adamsmith47 I don't think this is a bug; the Splunk documentation calls out that system/local has higher precedence than an app's local or default directory.
But files in apps have a higher priority than those in system/default, obviously. There were no conflicts with these settings in other apps.
Within apps, do you have any other app that also overrides this configuration? If so, precedence between apps falls back to alphabetical order of app names. Did btool debug show that the configuration was being picked up from your app's local settings?
Yes, btool showed the settings I was attempting to implement, attributed to the app I put them in.
I believe that if you deploy limits.conf to MyApp/[default|local]/, the limits will only apply to users of MyApp unless you have something like export=system in your .meta files.
If you put limits.conf in etc/system/local, the settings are global.
Most of the .conf files I can think of work this way.
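As a hedged sketch of that export (MyApp is a placeholder app name, and whether you actually need this depends on your deployment and existing metadata):

```ini
# MyApp/metadata/local.meta
# Export this app's limits.conf stanzas to all apps (system-wide scope).
[limits]
export = system
```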
@niketnilay I've confirmed the limits with btool, but I also raised the values to the ones you had success with, and I'm still limited to 1000 rows. Yes, Splunk was restarted after changing the values.
I created a different dashboard to load 1440 rows quickly:
<dashboard>
<label>test 1440 rows with makeresults</label>
<row>
<panel>
<title>1440 rows fast</title>
<table>
<search>
<query>| makeresults count=1440
| streamstats count AS rows</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">50</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">true</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
</dashboard>
When using either Export > Export PDF or Export > Schedule PDF Delivery, the PDF is limited to 1000 rows.
Which version of Splunk are you using? I'm on 6.6.11.
Would you mind testing my exact XML code above?
Thanks.
Curious about the use case that requires a report with a 1370-row table...
If I read a line in half a second, it'll take ~13 minutes to finish reading that report, assuming I never repeat a line because my eyes or brain got tired.
There must (I think) be a better way.
Can I down vote this comment?
@adamsmith47,
here are the guidelines for answers.splunk.com, and specifically regarding down-voting:
https://docs.splunk.com/Documentation/Splunkbase/splunkbase/Answers/Voting#Downvoting
Downvoting should be reserved only for posts proposing solutions that could potentially be harmful for a Splunk environment or goes completely against known best practices.
I was only asking a question in an attempt to help, but it's a free country; click where you want to click.
The customer wants a table that big.