Alright, I modified my original response so that it's a bit clearer, but let me also expand, since I think there are some other things we can clear up here for anyone else who might be reading this.
Regarding the disabled inputs, what did you run to obtain that output? Since internal logs are coming in, there is some monitoring configured somewhere. Can you try the following:
$SPLUNK_HOME/bin/splunk cmd btool inputs list monitor
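If you add the --debug flag, btool also prints which configuration file each line comes from, which makes it much easier to track down where an input was disabled:

$SPLUNK_HOME/bin/splunk cmd btool inputs list monitor --debug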
As for the data you saw even though your indexer was down for a period of time: Splunk Universal Forwarders also have queues (http://docs.splunk.com/Documentation/Splunk/6.4.1/Deploy/Componentsofadistributedenvironment). The queueSize setting you brought up controls an in-memory queue. While your indexer was down, the Universal Forwarder's input queue backed up because data couldn't reach the indexer. When the indexer came back up (and since your Universal Forwarder never went down), the queued in-memory data was able to flow out, so you never lost any data. In addition, new data that had never been sent started flowing into the input queue, and the forwarder played catch-up on the data that had been missed during the indexer outage.
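To make that concrete, here is a minimal outputs.conf sketch for a Universal Forwarder. The stanza name, server address, and queue size below are placeholder values I made up for illustration, not values from your environment:

# outputs.conf on the Universal Forwarder (illustrative values only)
[tcpout:my_indexers]
server = indexer1.example.com:9997
# In-memory output queue; events wait here while the indexer is unreachable
maxQueueSize = 10MB
# Optional: with indexer acknowledgement, the forwarder keeps events queued
# until the indexer confirms it has received them
useACK = true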
This is why Splunk has multiple time fields: _indextime (the time the data was actually indexed) and _time (the time of the event itself). When you look at the timeline in your image, you'll notice that your data makes sense in chronological order; the time field you don't typically look at is the index time. To see this visually, the following search shows the latency (difference) between indexing time and event time for your splunkd.log events. A high latency means that a specific event had a large gap between the actual event time and when it got indexed. I bet if you run the following search over that time frame, you'll see a high max latency, indicating that your system was playing "catch-up" for events it had missed.
index=_internal source=*splunkd.log host=<universal_forwarder_hostname> | eval latency=(_indextime - _time) | stats count, avg(latency), min(latency), max(latency) by host
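And if you'd rather see when the catch-up happened instead of just the summary statistics, a timechart variant of the same search (same host placeholder as above) plots the maximum latency over time; you should see it spike for events indexed right after the indexer came back:

index=_internal source=*splunkd.log host=<universal_forwarder_hostname> | eval latency=(_indextime - _time) | timechart span=1m max(latency) by host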
Also, as an added bonus, here is a nice .conf presentation covering some of the data pipeline internals: https://conf.splunk.com/session/2014/conf2014_AmritBathJagKerai_Splunk_WhatsNew.pdf