Splunk Enterprise

Why is the Licensing page failing to render with the error "Mako failed to render"?

nz_021
Explorer

Hi,

I get a 500 Internal Server Error when I access the Licensing page, and this is shown in the log:

2023-08-07 13:39:01,536 ERROR [64d09185547fa7f0295810] __init__:370 - Mako failed to render: 

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 366, in render_template
    return templateInstance.render(**template_args)
  File "/opt/splunk/lib/python3.7/site-packages/mako/template.py", line 476, in render
    return runtime._render(self, self.callable_, args, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 883, in _render
    **_kwargs_for_callable(callable_, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 920, in _render_context
    _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 947, in _exec_template
    callable_(context, *args, **kwargs)
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 15, in render_body
    <%self:render/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 21, in render_render
    <%self:pagedoc/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 95, in render_pagedoc
    <%next:body/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/admin_lite.html", line 92, in render_body
    ${next.body()}
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/licensing/overview.html", line 209, in render_body
    % if hard_messages['cle_pool_over_quota'] is not None and hard_messages['cle_pool_over_quota']['count'] is not None and hard_messages['cle_pool_over_quota']['count'] >= stack_table[0]['max_violations']:
KeyError: 'cle_pool_over_quota'
 
 
Any help with this?

kaboom1
Explorer

Hello, I am having the same issue, and I understand it can be caused by a license violation. I also found out that a license peer being unable to connect to the license manager for 72 hours will cause warnings (I'm not sure yet whether a single error message equals a warning!).

Check your internal logs (if you still have them) for this:

index=_internal component=LMTracker ("failed to send rows" OR "unable to connect")

 

I also understand that the warning/violation is reset after 30 days (this period could differ), so I am assuming the issue will disappear 30 days after the violation.
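
If you want to check which license messages are currently active and when they were created, you can query the licenser messages REST endpoint on the license manager. A sketch (the exact field names may vary between versions):

| rest splunk_server=local /services/licenser/messages
| table category description create_time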

The other possibility is to edit the file /opt/splunk/share/splunk/search_mrsparkle/templates/licensing/overview.html
and comment out the section where it checks for the pool quota violation (I haven't tried that yet).
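
Instead of commenting the whole block out, a less invasive edit would be to guard the lookup so a missing key no longer raises a KeyError, for example (a sketch only; editing shipped templates is unsupported and the change will be lost on upgrade):

% if 'cle_pool_over_quota' in hard_messages and hard_messages['cle_pool_over_quota'] is not None and hard_messages['cle_pool_over_quota']['count'] is not None and hard_messages['cle_pool_over_quota']['count'] >= stack_table[0]['max_violations']: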

 


isoutamo
SplunkTrust

Hi

If a peer cannot connect to the LM, you have 72 hours to fix the situation. After that you cannot run normal searches until you fix it. There is no automatic timeout that gets it working again!

You must check that you have a connection from your peer to the LM; it usually uses port 8089. You must also have the same pass4SymmKey in the [general] stanza on both the peer and the LM for the connection to work.
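
On the peer side that means something like this in server.conf (just a sketch, with a placeholder hostname; on newer Splunk versions the setting is named manager_uri instead of master_uri):

[general]
pass4SymmKey = <same key as on the LM>

[license]
master_uri = https://lm.example.com:8089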

If the physical connection between the hosts is working, then just look at the LM's _internal log to see why it didn't accept the peer's connection.
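
For example, a search like this on the LM should surface the relevant messages (component names differ a bit between versions, so treat this as a starting point):

index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) (component=LM* OR "pass4SymmKey" OR "license")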

r. Ismo
