For example, I want to upload a log file to Splunk using the universal forwarder. But that log file contains a lot of log data I don't want to use and don't want to put into Splunk; I could process it either in the UF or on the Splunk side.
My end goal is to parse the logs into JSON so I can build a dashboard in Splunk.
Without knowing your source log format, it's impossible to give you an exact answer.
In the general case there are several options for dropping unneeded data from events and even changing their format. Normally this is done on the first full Splunk instance between the UF and the indexers. Quite often that's one of the indexers, but sometimes it's an HF (heavy forwarder). The most common way is to use props.conf and transforms.conf to select what you want to keep and what you want to drop. The Splunk documentation describes how to do it.
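As a rough example, dropping unwanted events with props.conf and transforms.conf usually looks something like this (a minimal sketch — the sourcetype name `my:app:log` and the `DEBUG` regex are placeholders you'd replace with your own values):

```ini
# props.conf — bind a filtering transform to your sourcetype
[my:app:log]
TRANSFORMS-drop_noise = drop_debug_events

# transforms.conf — route matching events to nullQueue (i.e. discard them)
[drop_debug_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching REGEX are sent to nullQueue and never indexed; everything else passes through and is indexed normally. These files go on the first full Splunk instance (indexer or HF), not on the UF.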
If you want to change a "normal" log event into JSON, then I propose you use some external tool to do it and e.g. send the result to Splunk HEC, or use a scripted / modular input to generate it. But if you have normal kv (key=value paired) log events, or you can do the log onboarding correctly, then there's no need to change the format to JSON to use it in Splunk.
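If you do go the external-tool route, a rough sketch in Python could parse key=value lines into JSON and post them to HEC. This is just an illustration, not a finished tool: the host, port, token, and sourcetype are placeholder values, while `/services/collector/event` is Splunk's standard HEC endpoint.

```python
import json
import urllib.request

# Placeholder HEC settings — replace with your own host and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def kv_line_to_dict(line):
    """Turn a simple 'key=value key=value' log line into a dict."""
    fields = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

def send_to_hec(line, sourcetype="my:app:json"):
    """Wrap the parsed event in HEC's JSON envelope and POST it."""
    payload = {"event": kv_line_to_dict(line), "sourcetype": sourcetype}
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example:
# kv_line_to_dict("user=alice action=login status=ok")
# -> {"user": "alice", "action": "login", "status": "ok"}
```

Whether this is worth doing depends on your logs — as said above, if the events are already kv pairs, Splunk can extract those fields at search time without any JSON conversion.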
Basically you should create your own TA/app for your Splunk indexer/HF/SH plus the DS/source system for this. There are some old articles like https://community.splunk.com/t5/Getting-Data-In/How-to-monitor-OpenVPN-connections-for-a-standalone-... (I'm not sure if this is still valid?) which may help you collect OpenVPN logs.
Personally I would do that onboarding phase on my own test instance first, and then add those definitions to the correct places once I'm happy with the results.
If you haven't done this before, I strongly recommend Splunk's Data Admin course and the other needed prerequisites, and/or asking your local Splunk Partner for help in showing you how this should be done.