There is a hardcoded limit of 256K (262,144) fields per event on the S2S receiver, and the connection is terminated when the limit is exceeded. The HF/SH will then most likely retry sending the same data, blocking the HF/SH permanently.
First check whether there is an issue with field extraction on the FWD side; after all, 256K fields is far too many for a single event.
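For illustration, one hypothetical misconfiguration that can produce a runaway field count is an index-time structured-data extraction applied to very wide events, e.g. in props.conf on the forwarder (the sourcetype name here is made up):

[myapp:wide]
# Every key in the structured data becomes an index-time field that is
# shipped over S2S; a single event holding a huge flattened JSON document
# can therefore yield an enormous number of fields.
INDEXED_EXTRACTIONS = json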
Assuming you still need that event with 256K+ fields, here is what to do:
1. Move all props.conf/transforms.conf settings for the source/sourcetype in question from the HF/SH to the indexers. (Note: the ERROR log on the indexing side provides the source/sourcetype/host.)
2. Add the following setting to the inputs.conf stanza for the input in question, so that parsing is moved from the HF/SH to the IDX tier (see the example after this step):
queue = indexQueue
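For instance, for a monitored file input on the HF (the path and sourcetype below are hypothetical placeholders):

[monitor:///var/log/myapp/wide_events.log]
sourcetype = myapp:wide
# Route the data past the local parsing pipeline straight into the
# indexQueue, so parsing happens on the indexers instead.
queue = indexQueue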
Can you tell us more about this setting? The inputs.conf.spec file says setting the value to "indexQueue" sends data 'directly into the index', implying no parsing is done (is that even possible?). Under what conditions would we use indexQueue?
That spec wording is a bit misleading; the data is not indexed without parsing. indexQueue is used here so that the HF/SH skips its own parsing pipeline and leaves parsing to the indexers.
If you look at the Masa diagrams you'll see where the indexQueue sits in the pipeline.
By default, the splunktcp input routes events into different queues depending on which keys are present in the data: if the data has not been parsed yet, it goes into the parsingQueue, and so on. Check system/default/inputs.conf.
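For reference, the default [splunktcp] stanza looks roughly like this (check your own $SPLUNK_HOME/etc/system/default/inputs.conf, as the exact route can vary by version):

[splunktcp]
# Already-parsed data carries the _linebreaker key and is routed straight
# to the indexQueue; unparsed data (no _linebreaker) goes into the
# parsingQueue to be parsed on this instance.
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue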