A small footnote to the thread, to add to the existing advice:
- for serious disk IO inside your VMs you absolutely must configure VirtIO to get better disk IO performance. Period, not really up for discussion: IDE emulation "works" in the sense that it is easy and functional, but it is definitely not optimal.
- VirtIO has been supported in modern Linux guests for quite a few years now. (I don't think you mentioned in your thread which OS your guest VMs are running?)
- note VirtIO is also fine on typical Windows guests (especially nice on Vista/Win7/Server 2008 and more recent - I still find some issues with 2003/XP and certain VirtIO driver configs, but hopefully most people are not doing new deployments of 2003 at this stage in its looming EOL cycle).
- the basic procedure to move from IDE to VirtIO disk is easy (a rough CLI sketch follows the list), i.e.,
(a) schedule brief downtime for the VM
(b) attach a tiny new VirtIO disk to the VM
(c) power off and power on the VM
(d) attach the VirtIO driver ISO as a CD-ROM - available from the KVM project site, per the link in the Proxmox wiki docs (if running a Windows guest)
(e) install the VirtIO drivers in your guest OS if required (Windows)
(f) make sure the new VirtIO disk is visible - it shows up in Device Manager as a "Red Hat VirtIO SCSI controller"
(g) power off, then detach and delete this temporary tiny disk, which only existed to force installation of the VirtIO drivers
(h) detach your REAL OS disk and re-attach it to the VM, but select the VirtIO bus instead of the IDE bus
(i) boot your VM - it should now come up using the VirtIO bus and drivers for the boot volume, and magically disk performance inside this VM becomes much nicer.
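For the CLI-inclined, here is a rough sketch of those steps using the qm tool. The VM ID (101), storage name ("local") and ISO filename are just placeholders for illustration - adjust to your setup, and note everything here can equally be done from the web GUI:

    # (b) attach a tiny 1GB VirtIO disk to VM 101 (VM ID and storage name are examples)
    qm set 101 --virtio1 local:1
    # (d) attach the VirtIO driver ISO as a CD-ROM (ISO filename is an example)
    qm set 101 --ide2 local:iso/virtio-win.iso,media=cdrom
    # (c) full power cycle so the new virtual hardware is presented to the guest
    qm shutdown 101 && qm start 101
    # ... install the drivers inside the guest, confirm the tiny disk is visible ...
    # (g) detach the temporary disk again (it becomes "unused"; remove it from the GUI)
    qm set 101 --delete virtio1
    # (h) with the VM powered off, switch the real disk from IDE to VirtIO by editing
    #     /etc/pve/qemu-server/101.conf: rename the "ide0: ..." line to "virtio0: ..."
    #     and change "bootdisk: ide0" to "bootdisk: virtio0", then boot
    qm start 101
    # inside a Linux guest the disk should now appear as /dev/vda on the virtio driver
    lsblk && lsmod | grep virtio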
At the end of the day though: I have a few testboxes for misc test work which sit on bare SATA disk, and their performance will never be great - as others have said, a single SATA disk does not hold up once you have a lot of VMs grinding IO against the same physical device. Possible workarounds on modern versions of Proxmox include:
-- deploy with a ZFS "software raid" pool and many spindles - no hardware RAID controller, just multiple SATA disks in a ZFS storage pool used for Proxmox VE VM storage. You get the usual "many spindles make better performance" speedup of a RAID config, i.e. you avoid spending $ on a hardware RAID card, though you can't avoid spending money on multiple disks and a chassis that accommodates enough drives. Setup work is a bit more in this config, but performance is better too. Clearly, 8 x 500GB drives in a ZFS pool will yield better IO performance than a single 4TB SATA disk, even if the total disk space is not so different (rough example below).
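As a rough illustration of that ZFS option - the pool name ("tank"), storage ID and device names below are purely hypothetical, and whether you pick striped mirrors (RAID10-style, usually best for VM IO) or raidz (more capacity per disk) is up to you:

    # 8 disks as striped mirrors; ashift=12 suits 4K-sector drives - adjust devices to your box
    zpool create -o ashift=12 tank \
        mirror /dev/sdb /dev/sdc \
        mirror /dev/sdd /dev/sde \
        mirror /dev/sdf /dev/sdg \
        mirror /dev/sdh /dev/sdi
    zpool status tank
    # then add the pool to Proxmox as ZFS storage (storage ID "vmdata" is an example)
    pvesm add zfspool vmdata -pool tank -content images,rootdir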
-- note that Ceph is now supported as a storage type, and it too lets you build a multi-spindle, bare-disk, no-hardware-RAID storage pool (fault tolerance comes from multiple nodes / multiple disks / suitable levels of redundancy). But I am guessing that for a solo testing box it will be simpler to build a ZFS-based storage pool with multiple SATA disks on the single host rather than getting into Ceph.
-- or you can always do an "unsupported" Proxmox install config, i.e. set up minimal Debian on bare metal first, on top of a software RAID config that uses multiple spindles (e.g. a RAID10 volume spanning 8-12 physical SATA disks), and then custom-add the Proxmox VE packages on top afterwards (sketch below). But ZFS would be the "supported" way of doing this kind of setup.
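For completeness, a rough sketch of that unsupported route - device names are again hypothetical, and the array would be built on the minimal Debian install before layering the Proxmox VE packages on top per the Proxmox wiki procedure:

    # RAID10 across 8 example data disks on a minimal Debian install
    apt-get install mdadm
    mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array config
    mkfs.ext4 /dev/md0
    # mount it where Proxmox expects local VM storage, e.g. /var/lib/vz (add to /etc/fstab too)
    mkdir -p /var/lib/vz && mount /dev/md0 /var/lib/vz
    cat /proc/mdstat                                 # watch the initial resync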
Hope this meandering post is of some help maybe.
Tim