I did quite a bit of Googling, and making it all one partition is certainly what the vast majority are doing these days. I like the idea of not having to crystal-ball partition sizes, and for a desktop, file server, or database server on bare metal I would have done it that way too. I welcome challenges to my reasoning, because I'm obviously not in the majority, and "recent" for me is FreeBSD 7.2, LOL! My reasons, from a VMware perspective, for my partitioning decisions were these:
1. swap I just made 4GB. Even with processes actively swapping due to lack of memory and causing a massive number of page faults, I've never gotten FreeBSD, at least 7.2, to use more than 2GB of swap, even under unrealistic test conditions back when we ran the SLC SSDs. It just plain wouldn't do it. And with ESXi, adding memory amounts to clicking a spin box.
2. /tmp I didn't go with tmpfs, for memory reasons. I made it 1GB and purge it on reboot.
3. /var/tmp I know people often symlink it to /tmp or use tmpfs, but in FreeBSD /var/tmp is intended to survive reboots. I made it 1GB, as in the past.
4. /var with DA on FreeBSD doesn't require much of a crystal ball, because mail folders and user databases live under /home. Logs are the only thing that could get out of control, and I need to set up processes to trim them back in any scenario; the fixed size makes sure I do. Other than that there's the root Apache, a few web apps, a few configs, the DNS database, ClamAV, pkg, etc., but nothing besides logs that grows much, that I know of. dumpdev is set to auto (the default), so crash dumps will only capture kernel memory.
5. /usr requires a little more of a crystal ball, and for that reason I set up /usr and /home last so that I can expand them if necessary. I've never had to expand /usr, even when it was set at 10GB; the worst case I've seen was 67% full, so I went with 15GB. Separating /usr from /home means the apps can keep working if /home fills up.
6. /home, defined last, gets the remainder of the virtual drive. /home is the only volume I ever find running on the edge, and twice it has gone over. Users seldom use their full quota, so for efficient use of space it is overbooked; that also means a few pack rats can cause a problem. The one time I've had to expand it, I simply went into the VM's settings, clicked the spin box to extend the virtual drive to the size needed, made some careful manual edits in FreeBSD, and expanded /home.
7. Unlike bare metal, with ESXi and its normal thin provisioning the sizes of volumes are simply maximums: the only physical space consumed is what is, or has been, actually used. Thus we overbook ESXi's physical disk space with VMs as well. With a minor amount of work I can reclaim the no-longer-used space too, but I only tend to do that after some big change. If I ever have a problem with a partition in the middle of the pack, a few mouse clicks would let me make a new virtual hard drive and move it there, either temporarily or permanently; the only cost of another virtual hard drive is two more files in my backups. With the need to expand a partition being very infrequent, and with /home at the end, I've never had to add another drive. The only place I currently do that is a multi-TB backup volume that I do not want backed up. While ESXi cost us almost exactly 25% in CPU cores over bare metal under max loads, with 7.2, UFS on VMFS gives far better disk performance than UFS on bare metal (my disbelief and curiosity caused me to waste considerable time attempting to prove otherwise). I went with MBR because many VM tools do not work with GPT; according to other VMware users, MBR is certainly not slower than GPT, and the ability to do a V2P with GPT, and have it boot afterwards, is not a given.
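For what it's worth, the /home expansion in point 6 can be sketched roughly like this on a newer FreeBSD with gpart (on 7.2 I had to do the equivalent with careful fdisk/bsdlabel edits). The device names and partition indexes below are assumptions; check `gpart show` against your actual layout first, and have a backup:

```sh
# Sketch only -- da0, slice da0s1, and the "f" (/home) partition are assumptions.

# After extending the virtual drive in the VM's settings, have the guest
# re-read the disk size (a reboot works too):
camcontrol reprobe da0

# Grow the MBR slice into the newly added space:
gpart resize -i 1 da0

# Grow the last BSD partition inside the slice (assumed da0s1f = /home):
gpart resize -i 6 da0s1

# Finally grow the UFS filesystem itself:
growfs /dev/da0s1f
```

Note that growing a mounted filesystem only works on recent releases; on older FreeBSD, growfs wants the filesystem unmounted first. This only works cleanly because /home sits at the end of the disk, which is exactly why I defined it last.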
Using ESXi and the normal thin provisioning does impact the reasoning for running everything off /, but maybe there are other advantages that I am not aware of. One could argue that I could have been extravagant with space, that it would have made no real impact, and that I would never run out. However, size limits are also an advantage in some cases, as with /tmp, /var/tmp, and arguably /var. In addition, during some VM manipulation operations the VMs are expanded to their full size. A common time that happens is when shrinking VMs to reclaim previously used space, such as after a major upgrade: the unused space is filled with zeros, expanding the VM to its maximum size, before it is copied back to thin and becomes much smaller. Irrelevant factors that influenced my decisions: the old VM was 80G, /home was ~95% full at 57G, and I settled on 128G for the VHD size (on a 2TB VMFS partition), I guess because they make physical hard drives that size, and this layout results in an even 100G /home. I'm certainly open to better-reasoned partition strategies in the context of a thin-provisioned environment, and I already feel a little stupid about my decisions for /usr and /home. Since they are VMs, I can throw one away and make another at will, with no requirements for additional hardware or impact on production VMs.
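The zero-fill-then-copy-back-to-thin cycle I mentioned can be sketched like this. The file name and datastore paths are assumptions, the guest step runs in FreeBSD, and the vmkfstools step runs in the ESXi shell with the VM powered off:

```sh
# Inside the guest: overwrite the free space with zeros. This temporarily
# balloons the thin VMDK toward its maximum size. (Hypothetical file name;
# dd exits with an error when the volume fills, which is expected here.)
dd if=/dev/zero of=/home/zerofill bs=1m || true
rm /home/zerofill

# On the ESXi host, with the VM powered off: clone the disk back to thin,
# which drops the zeroed blocks, then swap the new VMDK in for the old one.
vmkfstools -i /vmfs/volumes/datastore1/vm/vm.vmdk \
           -d thin /vmfs/volumes/datastore1/vm/vm-thin.vmdk
```

This is why size limits still matter even when space is overbooked: every volume's maximum, not its usage, sets how big the VMDK balloons during the zero-fill pass.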