Splunk does not recommend using the force_local_processing property unless you have been advised to do so by Splunk. Enabling it can increase CPU and memory consumption on the universal forwarder.
The force_local_processing setting (false by default), when set to true, forces a universal forwarder to process all data with that sourcetype locally before forwarding it to the indexers. Data with this sourcetype is run through the linebreaker, aggregator, and regexreplacement processors, in addition to the utf8 processor that already runs on the forwarder.
Note that force_local_processing applies only to a universal forwarder.
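As a sketch, the setting goes in props.conf on the universal forwarder, under the stanza for the sourcetype in question (the sourcetype name `my_custom_sourcetype` below is just a placeholder):

```ini
# props.conf on the universal forwarder
[my_custom_sourcetype]
# Parse this sourcetype locally (linebreaking, aggregation, regex replacement)
# instead of leaving it to the indexers. Increases UF CPU/memory usage.
force_local_processing = true
```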
After further debugging, the reason milliseconds did not appear turned out to be very simple:
I was using a transforms.conf on the indexer with the setting: DEST_KEY = _meta
Changing it to WRITE_META = true fixed everything; there was no need to force local processing anymore.
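For illustration, the change looks roughly like this in transforms.conf on the indexer (the stanza name `extract_millis` is a placeholder; your REGEX/FORMAT lines stay as they are):

```ini
# transforms.conf on the indexer
[extract_millis]
# Before (broken): DEST_KEY = _meta replaces the _meta key outright,
# which can clobber metadata such as the extracted timestamp.
# DEST_KEY = _meta

# After (working): WRITE_META = true appends the extracted field
# to _meta, the recommended way to create indexed fields.
WRITE_META = true
```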
You may still read below for awareness...
Dear mglauser_splunk, I recently had an issue parsing milliseconds on custom sourcetypes my team had created (not default ones). Milliseconds did not get parsed at all. I tried many configurations on the indexers and forwarders, even modified datetime.xml and tried the (deprecated) train command, with no success.
After trying for a couple of hours to get milliseconds parsed correctly, the solution I found was to set the force_local_processing property to true on the Splunk universal forwarders:
```ini
# Sample timestamp: 2019-02-01 11:02:13.178
# TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N  -> proven not needed, as datetime.xml seems to cover it ???
# TIME_PREFIX = ^\[\w*\s*\]\s          -> proven not needed, as datetime.xml seems to cover it ???
```
You need to restart the Splunk Universal Forwarder for the changes to take effect: /opt/splunkforwarder/bin/splunk restart