Splunk ITSI

ITSI Import Objects - itsi_entity_name_normalizer: Why does it fail when entity volume grows too high?

travisakins
Engager

This documents a solution built while working with ITSI 4.13 and the Content Pack for Monitoring and Alerting 1.5 on a Splunk 8.2.4 platform.

The Content Pack for Monitoring and Alerting creates the itsi_entity_name_normalizer import job to ensure that every entity gets an alias called entity_name. Other searches created by the content pack rely on this alias, so it is important that the job runs as intended.

As the number of entities grows, the likelihood of this job failing increases. Running the out-of-the-box SPL against a larger entity pool surfaces a 414 error (URI too long), because the job processes the entire pool on every run.
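To gauge how large your entity pool is before the job hits this limit, a quick count over the same lookup the job reads can help (a minimal sketch, using only the lookup referenced below):

| inputlookup itsi_entities
| stats count AS total_entities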

To fix this, we need to change the logic in the 'ITSI Import Objects - itsi_entity_name_normalizer' job so that it does not run against entities that already have the entity_name alias.

Original SPL:

| inputlookup itsi_entities | eval entity_name=title

Updated SPL:

| inputlookup itsi_entities where NOT _itsi_identifier_lookups=entity_name*
| search retirable!=1
| eval entity_name=title
| eval entity_title=title
| head 5000

While the eval for entity_title is redundant here, it is useful when reusing this search for ad-hoc entity imports, since the UI will not allow mapping title to title. It doesn't hurt to keep it; if others disagree, please update as needed.

Additionally, the head command lets us control the volume processed in each batch, so we are protected if a large influx of new entities arrives.
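To check whether 5000 per cycle keeps pace with new entities, a count of what still lacks the alias, reusing the same filters as the updated SPL above, gives a rough backlog figure (a minimal sketch):

| inputlookup itsi_entities where NOT _itsi_identifier_lookups=entity_name*
| search retirable!=1
| stats count AS entities_awaiting_normalization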

This assumes you do not need the entity_name field continually overwritten every cycle. After reviewing the other knowledge objects the content pack creates, I could not find a reason it would need to be refreshed once set.
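If you want an occasional sanity check that the alias has not drifted from the title, a spot check like the following can be run ad hoc (a sketch that assumes entity_name is exposed as a field on the itsi_entities lookup rows; adjust if your lookup stores it differently):

| inputlookup itsi_entities where _itsi_identifier_lookups=entity_name*
| where entity_name!=title
| table title entity_name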

Lastly, with the introduction of Entity Management Policies in ITSI 4.x, we added an extra filter to exclude entities that have the retirable flag set. If an entity is flagged for retirement, we concluded it should be excluded from this job. It is rare that an entity would qualify, since retirement marks the end of the entity lifecycle, but there is no harm in the extra check.


jcunningham63
Loves-to-Learn Lots

@colbym, thank you for this workaround. Using the original search also worked for me. I recently upgraded to version 14.5.1, but that did not resolve the issue. Hopefully Splunk will fix this soon.


colbym
Path Finder

This "fix" seems to introduce a new problem.    The Status for all these entities will become Unstable in the Infrastructure overview.  The status is determined by whether or not the initial imports complete successfully on subsequent cycles.  Since this normalizer will only run the first time, and skip all entities on subsequent runs, only the main import will show as completed and the status goes to unstable.   In the example below you can see that the windows import completed "just now" but the normalizer import shows the epoch time of last Friday, when I had deleted all my entities and let the imports start fresh.  After the first run, all the entities were green and active, but as soon as the next cycle ran, they all went to Unstable.  

 

["combined=unstable", "itsi import objects - windows os hourly entity import=just_now", "itsi import objects - itsi_entity_name_normalizer=1663960530.1300445"]


colbym
Path Finder

Update - I reverted to the old version of the normalizer import, and it fixed the issue of entities going to "Unstable". After consecutive runs of the imports, all my entities are staying in Active status.

As mentioned in my last post, I believe this is because once an import has run against an entity, it has to run successfully again on every cycle. Since the updated version excludes entities that already have the entity_name alias, those entities are no longer returned and become Unstable.


yannK
Splunk Employee

Great workaround for a large number of entities.
