Prior to updating to Splunk Enterprise 8.0.2, scheduled accelerated reports ran extremely fast:
Report A
Duration: 37.166
Record count: 314
After updating to Splunk Enterprise 8.0.2, the same report ran extremely slowly:
Report A
Duration: 418.621
Record count: 300
Given the patch notes for 8.0.2, I'm not seeing any changes to acceleration or summary indexing – so is it safe to assume this is a fluke?
The massive increase in job duration for the scheduled accelerated reports appears to be caused by them no longer accessing the corresponding report acceleration summary: the "Access Count" never goes up when the scheduled reports run.
Guess we'll wait for 8.0.3 to fix this.
Troubleshooting steps attempted:
Manually rebuilt report acceleration summaries
Deleted all affected report acceleration summaries
Deleted and recreated affected production reports – recreated the schedule and checked the box for acceleration
Checked filesystem permissions of the inputlookup CSV – confirmed -rw-rw-r-- splunk splunk
Hi codebuilder, I ran your check with different times/dates but it shows no results. I am also logged in as admin with full permissions. The datamodel is set to global with the same settings as the other datamodels.
Is your datamodel accelerated? If so, that may be an issue as well.
Try adding "summariesonly=false" to your search (you may need to look up the exact syntax).
This will include data that has not been accelerated, but will degrade search performance.
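As a rough sketch, assuming a tstats-style search against the datamodel (your_dm_name is a placeholder), the flag would look something like:

| tstats summariesonly=false count from datamodel=your_dm_name

With summariesonly=false, tstats falls back to searching the raw events when no acceleration summary is available, which is why results may appear even when the summary is broken.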
If you still get nothing, check permissions on your datamodel. It may be restricted to a user, role, or app.
Yep, it's accelerated, and set to all apps with everyone having read permissions.
Tried the summariesonly option and again got no results.
I could be wrong obviously, but to me that indicates your search context is off, or the datamodel does not have the correct permissions.
Ensure that your test searches are using the correct context, and verify your datamodel is not restricted to a specific role/user/app (e.g. clone the datamodel, give it wide-open permissions, then test again).
Likely you are searching from an app context that does not have access to the datamodel.
Not all searches are the same. The generic Splunk "search" does not necessarily have access to a datamodel created under the context of a Splunk app. Try changing the URL in your browser to include the name of your app (if applicable).
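As a sketch of what that URL change looks like (your_splunk_host and your_app_name are placeholders), switching from the generic Search app context to your app's context would be:

https://your_splunk_host:8000/en-US/app/search/search
https://your_splunk_host:8000/en-US/app/your_app_name/search

Running the same search from the second URL picks up knowledge objects scoped to that app.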
Also, the user/role/context of your search may or may not have access to the underlying index of the datamodel.
An additional check would be to use tstats, e.g.:
| tstats count where datamodel=your_dm_name by index
Slowly expand your time/date range. If you still get no results, then you know that either the user, role, or search context does not have permissions to the index or datamodel, or you're in the wrong search context.
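For example, a sketch of widening the window on that same check (your_dm_name is a placeholder; the earliest time modifier in the where clause is one way to do it):

| tstats count where datamodel=your_dm_name earliest=-24h by index
| tstats count where datamodel=your_dm_name earliest=-30d by index

If the 30-day version still returns nothing for a datamodel you know has data, permissions or context are the likely culprit rather than the time range.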
@smelf1 please post the DM search and the DM tree structure.
BASE SEARCH
index=cloudflare sourcetype=cloudflare:json
Check the permissions on your datamodel. Ensure it is available to the app, users, roles, etc.