One often-cited reason for choosing large, dense nodes is trying to drive down the cost of the hardware purchase. This is often a false economy, as dense nodes tend to require premium parts that frequently end up costing more per GB than less dense nodes.
For example, a 12-disk HDD node may only require a single quad-core processor to provide enough CPU resources for the OSDs. A 60-bay enclosure may require dual 10-core processors or greater, which are far more expensive per GHz provided. You may also need larger DIMMs, which command a premium, and perhaps even an increased number of 10 GbE or faster NICs.
The bulk of the hardware cost will be made up of the CPUs, memory, networking, and disks. As we have seen, all of these hardware resource requirements scale linearly with the number and size of the disks. The only way in which larger nodes may have an advantage is that they require fewer motherboards and power supplies, neither of which is a large part of the overall cost.
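The per-GB comparison described above can be sketched with some simple arithmetic. All prices, component choices, and disk sizes below are purely hypothetical assumptions chosen for illustration; they are not quotes from the text and real pricing will vary:

```python
# Hypothetical sketch: hardware cost per GB for a small 12-bay HDD node
# versus a dense 60-bay node. All prices are illustrative assumptions.

def cost_per_gb(component_costs, disk_count, disk_price, disk_size_gb):
    """Total node cost divided by raw capacity in GB."""
    total = sum(component_costs.values()) + disk_count * disk_price
    return total / (disk_count * disk_size_gb)

# 12-bay node: single quad-core CPU, modest RAM and networking.
small = cost_per_gb(
    {"chassis": 1500, "cpu": 400, "ram_64gb": 300, "nic_10gbe": 200},
    disk_count=12, disk_price=250, disk_size_gb=8000,
)

# 60-bay node: dual 10-core CPUs, larger DIMMs, faster NICs --
# the premium parts that push up the per-GB cost.
dense = cost_per_gb(
    {"chassis": 6000, "cpus": 5000, "ram_256gb": 3000, "nics_25gbe": 1500},
    disk_count=60, disk_price=250, disk_size_gb=8000,
)

print(f"12-bay: ${small:.4f}/GB, 60-bay: ${dense:.4f}/GB")
```

With these assumed figures the dense node works out more expensive per GB despite buying five times the capacity, because the CPU, memory, and networking costs scale with the disks while only the chassis is shared.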
When looking at SSD-only clusters, the higher performance of SSDs dictates the use of more powerful CPUs and greatly increases bandwidth requirements. It would certainly not be a good idea to deploy a single node with 60 SSDs, as the required CPU resources would either be impossible to provide or likely cost prohibitive. The use of 1U and 2U servers with either 10 or 24 bays will likely provide a sweet spot in terms of cost against either performance or capacity, depending on the use case of the cluster.