This depends on the source of the data.
First, the forwarder buffers data in its output queue. If load balancing is configured, the forwarder will switch to another indexer. An admin can also configure a persistent (disk-based) queue and set its size. Once these queues are full, the forwarder stops reading from its inputs, and the inputs react as follows.
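For reference, the in-memory output queue size is set in outputs.conf on the forwarder. This is a sketch, not a recommendation: the group name, server names, and size below are placeholders you'd replace with your own.

```ini
# outputs.conf on the forwarder (example values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Load balance across two indexers (hostnames are placeholders)
server = idx1.example.com:9997, idx2.example.com:9997
# In-memory output queue size (example)
maxQueueSize = 10MB
```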
For file or folder monitors, Splunk just keeps a bookmark of how far it has read. Once the queue empties, it continues where it left off. No data loss.
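A monitor input needs no queue settings for this to work; the read position is tracked internally (in the so-called fishbucket). The path and sourcetype here are examples:

```ini
# inputs.conf: file monitor (path and sourcetype are examples)
[monitor:///var/log/app/*.log]
sourcetype = app_logs
```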
For TCP inputs, Splunk has an in-memory input queue and can have an additional persistent queue. It fills the queues (memory first, then disk) and finally stops reading from the TCP stack. The server's TCP stack can then signal the sending device to stop sending (TCP backpressure). This can prevent data loss if the sending device handles it correctly.
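A persistent queue for a TCP input is configured per input stanza in inputs.conf; the port and sizes below are examples:

```ini
# inputs.conf on the forwarder (example values)
[tcp://:9001]
# In-memory input queue (example size)
queueSize = 1MB
# Disk-based persistent queue, kept under $SPLUNK_HOME/var/run/splunk
persistentQueueSize = 500MB
```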
For UDP, Splunk reacts the same as for TCP, but when it stops taking data from the UDP stack there is no way to tell the sending device, so it keeps sending and that data is lost. This is why best practice is to use a system syslog receiver in front of Splunk.
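One common pattern (a sketch; the port, file path, and sourcetype are examples) is to let rsyslog receive the UDP syslog traffic and write it to disk, then have Splunk monitor the resulting file. The OS-level receiver keeps accepting data even while Splunk is blocked:

```ini
# /etc/rsyslog.d/10-splunk.conf (rsyslog syntax)
module(load="imudp")
input(type="imudp" port="514")
*.* /var/log/remote-syslog.log

# inputs.conf on the forwarder
[monitor:///var/log/remote-syslog.log]
sourcetype = syslog
```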
For scripted inputs, the admin can also set up an in-memory queue and a persistent queue. When the forwarder can't send to the indexer, it lets running scripts fill these queues and stops launching new instances of the script. Depending on what the script does, data may or may not be lost.
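Scripted inputs take the same per-stanza queue settings; the script name, interval, and size here are examples:

```ini
# inputs.conf: scripted input with a persistent queue (example values)
[script://./bin/collect_metrics.sh]
interval = 60
persistentQueueSize = 100MB
```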
For Windows event log monitoring, Splunk stops collecting. Windows itself keeps a large buffer, and it is not unusual for it to hold a year's worth of events, so Splunk should recover without loss.
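Windows event log collection is also configured per channel in inputs.conf; a minimal example (the Security channel is just one common choice):

```ini
# inputs.conf on a Windows forwarder
[WinEventLog://Security]
disabled = 0
```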
For Windows performance monitoring, Splunk stops polling the counters, so samples that would have been taken during the outage are simply never collected.
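For context, a performance-monitor input looks something like this (object, counter, and interval are examples):

```ini
# inputs.conf on a Windows forwarder (example values)
[perfmon://CPU Load]
object = Processor
counters = % Processor Time
interval = 10
```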
I don't have the full information for other input types.
Heavy forwarders behave exactly the same as universal forwarders in this respect, except that they also have internal queues used during parsing. In theory this can effectively increase the buffer size, but I wouldn't rely on it; a persistent queue is better.