I want to report on emails whose subjects use different character sets, such as Chinese, Russian, Greek, etc.
Is there an easy way to pull out the character encoding from the emails?
eg:
Sender: someone@somewhere.com
Subject: 你好,世界
I found a list of all languages here:
https://www.regular-expressions.info/unicode.html#prop (scroll down to "Unicode Scripts" and "Unicode Blocks").
You could use [^\p{Latin}], since everything you are looking for is non-Latin.
I think that's the closest you can get with the regex above.
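Outside Splunk, the same idea can be sanity-checked in a short script. This is a minimal sketch, not Splunk syntax: Python's standard `re` module does not support `\p{Latin}`, so it approximates `[^\p{Latin}]` by checking each letter's Unicode character name via the stdlib `unicodedata` module (every Latin-script letter's name contains "LATIN").

```python
import unicodedata

def contains_non_latin(text: str) -> bool:
    """Rough analogue of matching [^\\p{Latin}] on letters only:
    flag any alphabetic character whose Unicode name does not
    mention the LATIN script. Digits, punctuation and symbols
    are skipped entirely."""
    for ch in text:
        if ch.isalpha() and "LATIN" not in unicodedata.name(ch, ""):
            return True
    return False

print(contains_non_latin("someone@somewhere.com"))  # False
print(contains_non_latin("你好,世界"))                # True
```

Because only alphabetic characters are inspected, accented Latin letters such as é still count as Latin, and punctuation or currency symbols are ignored rather than flagged.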
It would be great if you could accept and upvote the answer, thank you 🙂
Hi,
It's a bit bulky, but wouldn't it work to use a regex that finds everything except the characters in your expected charset?
like
[^:.@,\s+\w+]
This matches the Chinese characters.
OR
And at least for Chinese there is a way to match all characters with \p{Han}. It seems to work in Splunk:
| makeresults | eval aaa="世界" | rex field=aaa "(?<my_aaa>\p{Han}.*)"
That was an interesting approach I hadn't considered. The problem I'm finding is that there seem to be lots of edge cases, such as é, í, !, ", £, $, %, and so on; I keep going back and finding more characters to exclude.
I'm not sure if this is the best way to do it, or if there's an alternate approach?
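One way around the ever-growing exclusion list is to invert the test: rather than excluding punctuation and symbols one by one, inspect only the alphabetic characters and report which scripts they belong to. A hedged Python sketch (an illustration of the idea, not a Splunk command), using the stdlib `unicodedata` module and taking the first word of each letter's Unicode character name as a rough script label:

```python
import unicodedata

def guess_scripts(text: str) -> set:
    """Collect rough script labels (the first word of each letter's
    Unicode name, e.g. LATIN, CYRILLIC, CJK). Non-letters such as
    £, $, % and ! are skipped, so they never need explicit
    exclusion, and accented letters like é stay LATIN."""
    scripts = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name:
                scripts.add(name.split()[0])
    return scripts

print(guess_scripts("Re: £5 café order"))  # {'LATIN'}
print(guess_scripts("你好,世界"))            # {'CJK'}
```

A subject is then "non-Latin" whenever `guess_scripts(subject) - {"LATIN"}` is non-empty, which sidesteps the edge cases above entirely.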