Say that you have 3 servers, all of them running the full Splunk installation (i.e. not forwarders). You configure two of them to listen for incoming traffic; these will be your 'indexers'. The third will be your 'search head', i.e. where you log in and run your searches.
From the Manager page on the Search head, you configure the indexers to be your 'search peers'. This means that when you perform a search on the Search head, the query will be sent to the peers, and they return the results to the Search head, where they are presented as graphs, lists, tables etc.
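Besides the Manager UI, search peers can also be defined directly in distsearch.conf on the search head. A minimal sketch (the hostnames and port are assumptions; 8089 is Splunk's default management port, but check your own deployment):

```
# distsearch.conf on the search head
# indexer1/indexer2 are hypothetical hostnames - use your own
[distributedSearch]
servers = https://indexer1.example.com:8089,https://indexer2.example.com:8089
```

You will still need to authenticate the search head against each peer (the Manager UI prompts for the peer's admin credentials when you add it).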
If you use Splunk Forwarders to get data in, you configure them to load balance between the indexers, so that the log data gets evenly distributed across your indexers. Load balancing is the default behaviour of forwarders - you just need to define more than one destination indexer in your forwarder configuration.
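In practice that forwarder configuration lives in outputs.conf. A minimal sketch, assuming the indexers listen on the common receiving port 9997 (the group name and hostnames here are made up; adjust to your environment):

```
# outputs.conf on the forwarder
# 'my_indexers' is an arbitrary group name; hostnames are hypothetical
[tcpout:my_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997
```

With more than one host in the server list, the forwarder automatically load balances across them, switching targets periodically.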
This is a slightly simplified picture, and it's not very different from a clustered setup. What clustering adds is data replication between indexers, so that if one of them goes down, the data is still available. In a non-clustered distributed setup, if an indexer goes offline, the data stored there will be unavailable until it comes back up.
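For comparison, index replication is configured in server.conf. A rough sketch of what the cluster master's side looks like (the factor values are just illustrative, and the exact stanza options vary by Splunk version, so treat this as an outline rather than a recipe):

```
# server.conf on the cluster master (illustrative values)
[clustering]
mode = master
# keep 2 copies of each bucket across the peers
replication_factor = 2
# keep 2 searchable copies
search_factor = 2
```

With that in place, the peer indexers replicate buckets among themselves, which is exactly the availability guarantee the plain distributed setup lacks.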
You should probably read up a little in the docs, and this is a good place to start;