Splunk Search

Can a PDF table be more than 1000 rows?

adamsmith47
Communicator

I'm in the process of building some high-priority dashboards for my management (time critical), and I'm having a problem when I schedule the PDF for delivery. One of my tables has 1370 rows, but the PDF version stops at 1000 rows.

Following the guide here: https://docs.splunk.com/Documentation/Splunk/6.6.11/Viz/DashboardPDFs#Additional_configurations_for_...

I discovered the defaults in limits.conf are:
[pdf]
max_mem_usage_mb = 200
max_rows_per_table = 1000
render_endpoint_timeout = 3600

I've changed them to the following by pushing an app and restarting:
[pdf]
max_mem_usage_mb = 300
max_rows_per_table = 2000
render_endpoint_timeout = 3600

The table still stops at 1000 rows in the PDF.
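
For reference, the merged values can be checked with btool (the path assumes a default $SPLUNK_HOME; adjust for your install):

$SPLUNK_HOME/bin/splunk btool limits list pdf
# prints the merged [pdf] stanza, i.e. the values Splunk
# considers effective after combining every limits.conf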

Is this limitation not surpassable?

Any help is greatly appreciated. Thank you.

0 Karma
1 Solution

niketn
Legend

@adamsmith47 I ran the following search in a dashboard over Yesterday's data, spanning every minute to generate 1440 rows, and tested Scheduled PDF Delivery:

index=_internal sourcetype=splunkd
| timechart span=1m count by component

With the default limits, I noticed only 1000 rows in the generated test PDF.

So I created the following limits.conf in system/local, and the test Scheduled PDF Delivery then showed all 1440 rows:

[pdf]
max_mem_usage_mb = 2048
max_rows_per_table = 2000
render_endpoint_timeout = 4200

So you need to check the following:

1) Use btool to confirm that your app's limits.conf setting for max_rows_per_table is actually being picked up. If someone has already put a limits.conf in system/local, that copy will be used instead of your app's.
2) If the row limit is correct, check whether one of the other limits, max memory usage or the render timeout, is being hit instead.
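
For example, a debug listing along these lines shows which file each setting comes from (again assuming a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool limits list pdf --debug
# --debug prefixes each line with the path of the limits.conf
# that supplied the setting, so an override coming from
# etc/system/local rather than your app is immediately visible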

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"
0 Karma

adamsmith47
Communicator

AH HA! Success!

Once I changed the limits.conf in system/local, it worked. That means the limits.conf [pdf] stanza in etc/apps is not being properly read by Splunk, even though it shows up in btool. Strange.

BUG ALERT!

0 Karma

niketn
Legend

@adamsmith47 I don't think this is a bug; the Splunk documentation calls out that system/local has higher precedence than app/local or app/default.

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles#Precedenc...
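
In the global context that page describes, the directory precedence from highest to lowest is:

$SPLUNK_HOME/etc/system/local        (highest priority)
$SPLUNK_HOME/etc/apps/<app>/local
$SPLUNK_HOME/etc/apps/<app>/default
$SPLUNK_HOME/etc/system/default      (lowest priority)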

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"
0 Karma

adamsmith47
Communicator

But files in apps have higher priority than those in system/default, obviously. There were no conflicts with these settings in other apps.

0 Karma

niketn
Legend

Within apps, do you have any other app that also overrides this configuration? If so, precedence between apps follows the alphabetical order of the app directory names. Also, did btool --debug show the configuration being picked up from your app's local settings?

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"
0 Karma

adamsmith47
Communicator

Yes, btool showed the settings I was attempting to implement, within the app I was attempting to implement them with.

0 Karma

jkat54
SplunkTrust

I believe that if you deploy limits.conf to MyApp/[default|local]/ the limits will only apply to users of MyApp, unless you have something like export = system in your .meta files.

If you put limits.conf in etc/system/local, the settings are global.

Most of the .conf files I can think of work this way.
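
I'm not certain limits.conf honors app-level export, but for illustration, this is the kind of stanza I mean; placed in MyApp/metadata/local.meta (MyApp is a placeholder name), it would export the app's limits.conf settings to the system context:

# MyApp/metadata/local.meta -- app name is a placeholder
[limits]
export = system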

0 Karma

adamsmith47
Communicator

@niketnilay I've confirmed the limits with btool, but I also raised the values to the ones that worked for you, and I'm still limited to 1000 rows. Yes, Splunk has been restarted after changing the values.

I created a different dashboard to load the 1440 rows quickly:

<dashboard>
  <label>test 1440 rows with makeresults</label>
  <row>
    <panel>
      <title>1440 rows fast</title>
      <table>
        <search>
          <query>| makeresults count=1440
| streamstats count AS rows</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

When using either Export > Export PDF or Export > Schedule PDF Delivery, the PDF is limited to 1000 rows.

Which version of Splunk are you using? I'm on 6.6.11.

Would you mind testing my exact XML code above?

Thanks.

0 Karma

adonio
Ultra Champion

Curious about the use case that requires a report with a 1370-row table ...
If I read a line in half a second, it'll take ~11.5 minutes to read that report without repeating the same line because my eyes or brain got tired.
There must (I think) be a better way.

0 Karma

adamsmith47
Communicator

Can I downvote this comment?

0 Karma

adonio
Ultra Champion

@adamsmith47,
here are the guidelines for answers.splunk.com, specifically regarding down-voting:
https://docs.splunk.com/Documentation/Splunkbase/splunkbase/Answers/Voting#Downvoting

Downvoting should be reserved only for posts proposing solutions that could potentially be harmful to a Splunk environment or that go completely against known best practices.

Only asking a question in an attempt to help, but it's a free country; click where you want to click.

adamsmith47
Communicator

The customer wants a table that big.

0 Karma