Splunk Search

Converting multiple html entities in each field to unicode characters

Path Finder


I have a dataset with very poor quality and multiple encoding errors. Some fields contain data like "&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;" which should be "Алексей". My first idea was to search every faulty dataset and convert it externally with a script, but I'm curious whether there's a better way using Splunk. I just have no idea how to get there.

I somehow need to catch every &#(\d{4}); and I could use printf("%c", \1) to get the correct Unicode character, but I have no idea how to apply that to every occurrence in a single field. Currently I have data like this:
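Outside Splunk, the substitution described above (match every decimal character reference, convert the captured number to a character) can be sketched in Python; the helper name `decode_entities` is just an illustration, not part of any Splunk API:

```python
import re

def decode_entities(s: str) -> str:
    """Replace each decimal character reference &#NNNN; with its Unicode character."""
    # \1 (the captured digits) is converted to an int, then to a character with chr()
    return re.sub(r"&#(\d{4});", lambda m: chr(int(m.group(1))), s)

print(decode_entities("&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"))  # Алексей
```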



Where I want to get to is this:



Any ideas whether that is possible without using Python scripts in Splunk?




Path Finder

Thanks. @bowesmana's solution seems to work, but it turns out that processing the data externally is more efficient. Many thanks to all.


Here's an example that extracts all the &#nnnn; sequences to a multivalue char field, which is then converted to the chars MV.

It seems mvdedup can be used to remove duplicates in each case, and the ordering appears to be preserved between the two MVs, so the final foreach can then replace each sequence inside name.

| makeresults format=csv data="id,name
1,&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"
``` Extract all sequences ```
| rex field=name max_match=0 "\&#(?<char>\d{4});"
``` Create the char array ```
| eval chars=mvmap(char, printf("%c", char))
``` Remove duplicates from each MV - assumes ordering is preserved ```
| eval char=mvdedup(char), chars=mvdedup(chars)
``` Now replace each item ```
| eval c=0
| foreach chars mode=multivalue [ eval name=replace(name, "\&#".mvindex(char, c).";", <<ITEM>>), c=c+1 ]
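For readers more comfortable outside SPL, the pipeline's four steps (extract all sequences, build the parallel char list, dedup both while preserving order, then replace each occurrence) can be mirrored in Python; `decode_like_spl` is a hypothetical name for illustration:

```python
import re

def decode_like_spl(name: str) -> str:
    # Step 1: extract all &#NNNN; sequences (rex max_match=0)
    codes = re.findall(r"&#(\d{4});", name)
    # Step 2: build the parallel character list (mvmap + printf("%c", ...))
    chars = [chr(int(c)) for c in codes]
    # Step 3: dedup the pairs, preserving first-seen order (mvdedup)
    pairs = list(dict.fromkeys(zip(codes, chars)))
    # Step 4: replace every occurrence of each sequence (foreach + replace)
    for code, ch in pairs:
        name = name.replace(f"&#{code};", ch)
    return name
```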

You could make this a macro and pass a string to the macro and the macro could do the conversion.

Note that fixing the ingest is always the best option, but this can deal with any existing data.

Assumes you're running Splunk 9 or later (foreach mode=multivalue was added in 9.0).



Here's a one-liner that handles both decimal and hexadecimal code points:

| eval name=mvjoin(mvmap(split(name, ";"), printf("%c", if(match(name, "^&#x"), tonumber(replace(name, "&#x", ""), 16), tonumber(replace(name, "&#", ""), 10)))), "")
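The decimal/hex branch in that one-liner boils down to parsing the digits in base 10 versus base 16 (the `tonumber(..., 16)` call). A Python sketch of the same per-token branching, assuming the field consists only of `;`-separated references as the SPL does:

```python
def decode_codepoints(name: str) -> str:
    out = []
    for tok in name.split(";"):
        if not tok:
            continue
        if tok.startswith("&#x"):
            out.append(chr(int(tok[3:], 16)))  # hex reference, e.g. &#x410
        else:
            out.append(chr(int(tok[2:], 10)))  # decimal reference, e.g. &#1040
    return "".join(out)
```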

You can also pad the value with XML tags and use the spath command:

| eval name="<name>".name."</name>"
| spath input=name path=name

or the xpath command:

| eval name="<name>".name."</name>"
| xpath outfield=name "/name" field=name

However, avoid the xpath command in this case. It's an external search command and requires creating a separate Python process to invoke $SPLUNK_HOME/etc/apps/search/bin/xpath.py.
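The reason the spath approach works is that any conformant XML parser decodes numeric character references during parsing. The same effect can be demonstrated with Python's stdlib ElementTree, wrapping the value in a tag exactly as the SPL above does:

```python
import xml.etree.ElementTree as ET

# Wrap the raw field value in a tag, then let the XML parser decode the references
wrapped = "<name>&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;</name>"
decoded = ET.fromstring(wrapped).text
print(decoded)  # Алексей
```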


spath works nicely, but the one-liner only works if ALL the data is made up of codepoints.


Here's an alternative using rex and eval that should accommodate chars that aren't XML entities:

| rex field=name max_match=0 "(?<name>(?:&#[^;]+;|.))"
| eval name=mvjoin(mvmap(name, if(match(name, "^&#"), printf("%c", if(match(name, "^&#x"), tonumber(replace(name, "[&#x;]", ""), 16), tonumber(replace(name, "[&#;]", ""), 10))), name)), "")
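The key idea in this variant is tokenizing the string so that literal characters pass through untouched while `&#...;` references are converted. The equivalent logic in Python (illustrative helper name, not a Splunk API):

```python
import re

def decode_mixed(name: str) -> str:
    """Decode &#NNNN; and &#xHHHH; references, leaving other characters as-is."""
    def repl(m):
        tok = m.group(0)
        if tok.startswith("&#x"):
            return chr(int(tok[3:-1], 16))  # strip "&#x" and ";", parse as hex
        return chr(int(tok[2:-1], 10))      # strip "&#" and ";", parse as decimal
    return re.sub(r"&#[^;]+;", repl, name)
```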

If I were using any of these solutions myself, I'd choose spath. It should handle any valid XML without needing to handle edge cases in SPL.


Indeed, as it is in @bitnapper's original question. nullif, match, etc. could be added for input validation.
