Hi,
I believe I've seen similar behaviour in the past (~Dec-2008) when I tried to install ProxVE (or even stock CentOS 5, for that matter) onto a RAID array presenting a single LUN of >2TB capacity.
I don't think this is a distro-specific bug per se, but rather a 'feature' of the bootloaders and MBR partition tables used by most Linux distros: an MBR partition table can't address beyond 2TB, so things get messy when installing onto a boot volume disk (i.e., a large RAID volume) of >2TB capacity.
With the 3ware RAID based host I had this problem on, the solution was to 'carve' out a smaller LUN (~10-100 gigs, as per your needs), which is presented as a unique 'disk' (i.e., /dev/sda) and can happily be used by the OS as a bootable volume. The remaining space on the RAID volume was carved into a second LUN (of approx ~2TB capacity), which the RAID controller presented as a second unique 'disk' (i.e., /dev/sdb).
I believe ProxVE will take care of the fiddly details of LVM, etc. to make use of all available space (i.e., from both disk LUNs presented) - but if you find the default auto-install does something foolish (like only using your first ~100 gig /dev/sda volume), you could always manually reconfigure LVM to put the data storage volume, /var/lib/vz, on your larger / second LUN (the ~2TB /dev/sdb, for example).
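If you do end up doing that LVM rework by hand, the rough shape of it would be something like the sketch below. Fair warning: this is untested, and the 'pve' volume group and 'data' logical volume names are the ProxVE defaults as I recall them - check what your install actually created (vgs, lvs, df -h) before running anything, since these commands rewrite disk metadata.

```shell
# Untested sketch - device, VG, and LV names are assumptions; verify first
# with: vgs ; lvs ; df -h

# Initialise the second (large) LUN as an LVM physical volume
pvcreate /dev/sdb

# Grow the existing ProxVE volume group (default name 'pve') onto it
vgextend pve /dev/sdb

# Grow the logical volume backing /var/lib/vz by the new capacity,
# then grow the filesystem to match (use the tool matching your FS)
lvextend -L +1.8T /dev/pve/data
resize2fs /dev/pve/data
```

Alternatively, you could leave the install's VG alone and make /dev/sdb its own volume group mounted at /var/lib/vz - either approach gets your guest storage onto the big LUN.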
I haven't worked with the exact RAID controller you are seeing this issue on, so I'm not 'certain' it has the capability to carve multiple LUNs out of the same RAID volume - but this functionality is present in most hardware RAID controllers I've worked with, so I'm optimistic you can at least test this and see if it helps at all in your case.
I hope this helps / is of some use,
Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca