I had to do this once for a huge CSV, though I also had to prune a few rows and didn't need half the columns (so not exactly the same as your requirement).
I opted to do it with a Python scripted input, which let me pre-process the file as it went in, dumping key=value pairs to stdout.
Once it completed I disabled the input, but it meant I could run it again if ever needed.
I never did need it again, but even so, it was time well spent making sure the data was concise when it went in.
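For anyone curious what such a scripted input looks like, here is a minimal sketch of the idea: read the CSV, drop rows and columns you don't need, and print one key=value event per row to stdout (which is what a Splunk scripted input hands to the indexer). The sample data, column names, and the `status` filter are all illustrative, not from the original post.

```python
import csv
import io

# Hypothetical sample standing in for the huge CSV (columns and values
# are made up for illustration).
RAW = """id,name,status,debug_blob,ts
1,alpha,active,xxxx,2020-01-01
2,beta,inactive,yyyy,2020-01-02
3,gamma,active,zzzz,2020-01-03
"""

KEEP = ["id", "name", "ts"]  # only the columns worth indexing


def preprocess(fh):
    """Yield one key=value event line per row we want to keep."""
    for row in csv.DictReader(fh):
        if row["status"] != "active":  # prune unwanted rows
            continue
        yield " ".join(f"{k}={row[k]}" for k in KEEP)


if __name__ == "__main__":
    # A Splunk scripted input reads whatever the script writes to stdout.
    for line in preprocess(io.StringIO(RAW)):
        print(line)
```

In a real scripted input you would open the actual file instead of the inline sample, and register the script under an `[script://...]` stanza in inputs.conf.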
You can add the following configuration parameters to the limits.conf file of your app:

[kv]
limit = <integer>
maxcols = <integer>

The default value for limit is 100 and for maxcols is 512. Try indexing your CSV again after increasing these defaults.
Link for your reference.
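As a concrete (illustrative) example, a limits.conf raising both settings might look like this; the numbers here are placeholders you should size to your own column count:

```
[kv]
limit = 1000
maxcols = 1000
```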
What configuration have you done to index this CSV?
If connectivity is in place, then add the following monitor on your Splunk forwarder using the CLI (the path and index name are placeholders for your own values):

./splunk add monitor <path_to_csv> -index <index_name>

Also, on the indexer, create the same index that you specified as <index_name>.