Install /root on usb

haux (Active Member, joined Jun 5, 2017)
Hi,

Has anyone successfully installed Proxmox on a USB stick and used the internal disks (SSD or HDD) as storage?

I tried, but it failed.

Since I have only one SSD in my home box, I want to dedicate 100% of the SSD's space to VMs.

Thank you.
 
I do not recommend it. Most low-end USB sticks (~90%) die within a few months when used as an OS disk. Some time ago I was using a high-end USB stick (with SLC NAND), and it died suddenly after one year, after only a few hundred read/write cycles...
 
Hi all,

Thank you for the answers. They make sense, but they don't help with my setup:

Next week I'm getting the exact same SSD, so I'll have two SSD drives for my Proxmox box, and I want to set them up as RAID1 (I don't have a hardware RAID card).

Any advice on how I can use the two SSDs as RAID1, install the OS on it, and create a big partition for the VMs? My understanding is that the BIOS can only boot from one disk; it doesn't understand a software RAID1 (I don't have a physical RAID card). This is why I thought of using a USB stick for the OS in the first place (unfortunately I can't afford a third disk for the OS :-( )

Any advice is welcome.


Thank you.
 
Now I'm not sure what your problem is. You *can* install Proxmox on ZFS RAID1. The installer takes care of the bootloader (it will be on both disks), and the only thing you have to do in the BIOS is set up the boot order so that both disk1 and disk2 are in it (the order doesn't matter).
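After such an install you can check the mirror and the boot setup from the shell. A quick sketch (assuming `rpool`, the default pool name the installer creates; `proxmox-boot-tool` exists on current PVE releases, older ones wrote GRUB to both disks directly):

```shell
# Check that the ZFS mirror is healthy and spans both disks
zpool status rpool

# List the disks that carry a bootloader copy (newer PVE releases)
proxmox-boot-tool status
```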
 
Yeah, smart, but it'll take all the space for the root partition; the Proxmox installer doesn't let you slice and dice the ZFS (RAID1) layout.
At least I'd like to avoid having the VMs in the same partition as the OS files :-(
 

Please read more about ZFS, which is a volume manager and a filesystem at the same time. The concept of partitions is not really applicable to ZFS, because all filesystems in a zpool share the same space. You can assign a quota and a reservation to your root dataset, so that the VM data is truly separated.
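As a sketch, a quota plus a reservation on the root dataset looks like this (the dataset names assume the default PVE layout, `rpool/ROOT` for the OS and `rpool/data` for VM disks; the sizes are arbitrary examples, adjust to your pool):

```shell
# Root may never grow beyond 30G...
zfs set quota=30G rpool/ROOT

# ...and is always guaranteed 20G, even if the VMs fill the rest of the pool
zfs set reservation=20G rpool/ROOT

# Verify the properties
zfs get quota,reservation rpool/ROOT
```

This gives you the "separate OS partition" effect without actually partitioning anything.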
 
Actually, he COULD use a USB drive, if it was one of these:
https://www.sandisk.com/home/usb-flash/extremepro-usb

Another option for USB would be a USB-to-SATA adapter, dock, or drive bay with a 2.5" SSD.

Of course one could, but those will also fail eventually. Everything fails, so without any redundancy you will have data loss, and as far as the operating system is concerned, you'll lose your running services and have to restore everything.
 
In that spirit, many servers (e.g. Dell) come with dual SD slots to run ESXi, so we could certainly use the same principle with two of the mentioned USB SSDs in a ZFS mirror or an mdadm RAID. Alternatively, make an occasional manual clone of the USB drive and keep a cold spare on the shelf, which is arguably even better than RAID. You could also have a third USB stick (e.g. a normal flash drive) to which you rsync the PVE configs nightly. That would be a solid solution, assuming his install is not mission-critical and doesn't need five nines; if it were, you should always run real RAID on a real RAID card, with RAID1 or RAID10 of high-quality SSD or SAS drives.
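The nightly config rsync mentioned above could be sketched as a cron job like this (`/etc/pve` is where PVE keeps its configuration; the cron file name and the mount point `/mnt/usb-backup` are assumptions for the example):

```shell
# /etc/cron.d/pve-conf-backup  (hypothetical file name)
# Copy the PVE configuration to the third USB stick every night at 02:30
30 2 * * * root rsync -a --delete /etc/pve/ /mnt/usb-backup/pve-conf/
```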

Since he only mentions two SSDs, he is most likely not running any high-load, mission-critical applications.
 

Yes, that's not wrong, but it's much more complicated than just using ZFS on all your storage, including the PVE root disk.

most likely he is not running any high-load, mission-critical applications.

Obviously not, but the question was about a home server, not a wide-area cluster with distributed flash-only and replicated SAN storage.
 
Personally, I have a couple of machines with the PVE OS and VM data on the same medium, but I always try to avoid that scenario. For instance, what if today I am using ZFS for its pve-zsync options, but tomorrow decide ZFS is too slow and want to move to raw devices on LVM? If my OS is separate, I have the freedom and flexibility to re-provision my data storage without losing the base OS or even rebooting, and it could all be done without console access.
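Re-provisioning a separate data disk like that could look roughly like this with the `pvesm` storage manager (a sketch only; the storage IDs `vmdata-zfs`/`vmdata-lvm` and the device `/dev/sdb` are assumptions, and the guests would have to be migrated or backed up first):

```shell
# Retire the old ZFS-backed VM storage definition from PVE
pvesm remove vmdata-zfs

# Create an LVM volume group and a thin pool on the data disk
vgcreate vmdata /dev/sdb
lvcreate -l 100%FREE --thinpool data vmdata

# Register it as a new PVE storage for guest images
pvesm add lvmthin vmdata-lvm --vgname vmdata --thinpool data --content images,rootdir
```

The OS disk is never touched, which is exactly the flexibility argued for above.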
 
