While Splunk uses zlib for compression internally, that's not something it exposes through search commands out of the box.
That said, it does make sense to decompress the data before indexing (as a pre-process), since it will ALL be compressed again during the indexing process anyway, using much the same methodology you're using now. All indexed data is stored compressed, and usually takes up 30%-70% less room on disk than the raw data.
The other option is for you and yours to create a custom search command that takes input (a field, inline), runs it through zlib decompression in a Python script, and feeds the output back to Splunk where you can use it. The Splunk documentation on custom search commands covers how to set that up.
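As a rough sketch of that approach, here's what a streaming custom command could look like using the splunklib Python SDK. The command name (`decompressfield`), the `field` option, and the assumption that the field holds base64-encoded zlib data are all illustrative, not taken from your question:

```python
#!/usr/bin/env python
# decompressfield.py -- hypothetical custom streaming command (a sketch,
# not a drop-in implementation). Assumes the Splunk Python SDK (splunklib)
# is bundled in the app's bin/ directory, and that the target field holds
# base64-encoded, zlib-compressed data.
import base64
import sys
import zlib

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option


@Configuration()
class DecompressFieldCommand(StreamingCommand):
    """Usage: ... | decompressfield field=payload"""

    field = Option(require=True)  # name of the field to decompress

    def stream(self, records):
        for record in records:
            raw = record.get(self.field)
            if raw:
                try:
                    # base64-decode first, since raw compressed bytes
                    # rarely survive the trip as a text field
                    record[self.field + "_decompressed"] = zlib.decompress(
                        base64.b64decode(raw)
                    ).decode("utf-8", errors="replace")
                except (ValueError, zlib.error) as e:
                    record[self.field + "_error"] = str(e)
            yield record


if __name__ == "__main__":
    dispatch(DecompressFieldCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```

You'd then register the command in commands.conf and call it like any other search command, e.g. `... | decompressfield field=payload`.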
You have not mentioned any specifics regarding why your data "arrives in pieces due to various size limitations", so it's difficult to say whether these suggestions are viable for you.
The least complicated solution would be to create a scripted input (in Python, if you like) that decompresses the data as it feeds it to the indexer (which will, in turn, compress it again and make it searchable).
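A scripted input can be as simple as a script that finds the compressed pieces, inflates them, and prints the result to stdout, which is how scripted inputs hand events to Splunk. In this sketch the drop directory, the `*.z` file pattern, and the rename-to-mark-processed scheme are all assumptions you'd adapt to your setup:

```python
#!/usr/bin/env python
# decompress_input.py -- hypothetical scripted-input sketch. Splunk runs
# scripted inputs on an interval and indexes whatever they write to stdout.
# Assumes incoming pieces land as zlib-compressed files in /opt/incoming
# (both the path and the naming convention are assumptions).
import glob
import os
import sys
import zlib

INCOMING = "/opt/incoming"   # assumed drop directory
PROCESSED_SUFFIX = ".done"   # rename so we don't re-index the same file


def main():
    for path in sorted(glob.glob(os.path.join(INCOMING, "*.z"))):
        with open(path, "rb") as f:
            try:
                # inflate and emit as an event for Splunk to index
                sys.stdout.write(
                    zlib.decompress(f.read()).decode("utf-8", errors="replace")
                )
                sys.stdout.write("\n")
            except zlib.error as e:
                # errors from scripted inputs end up in splunkd.log
                sys.stderr.write("skipping %s: %s\n" % (path, e))
                continue
        os.rename(path, path + PROCESSED_SUFFIX)  # mark as processed


if __name__ == "__main__":
    main()
```

You'd wire this up in inputs.conf with a `[script://...]` stanza and an interval that suits how often the pieces arrive.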
With Splunk... the answer is always "YES!". It just might require more regex than you're prepared for!