[SOLVED] Will it Proxmox?

inxsible

I have a 1U Supermicro server in an SC813MTQ chassis. It houses an X9SCL-F board with a Xeon E3-1240 processor that I currently use as a VMware ESXi host. I want to move over to Proxmox for three reasons:
  • The VMware CLI feels alien to me, and even finding things in the documentation is a struggle.
  • I am quite comfortable with Linux, having used it for the last 16+ years.
  • I do love using open source software
Enter Proxmox!!!

I currently run ESXi off a USB key on the back of the server, with the 4x500GB Constellation drives serving as a RAID datastore via an LSI 9260-4i. Yeah, this is HW RAID only -- it can't be flashed to IT mode; I have researched that bit. And it's a hassle to sell this card and then buy a proper HBA. I have this card and it works, so I don't intend to install Proxmox on a ZFS-based array.

In any case, my question about the switchover is two-fold:
  1. Can Proxmox be run off of a USB drive? I saw threads where some said it's possible and others said it's not recommended. But what if /var were moved over to one of the HDDs -- just like ESXi creates a /scratch partition on the HDD when installing to a USB? (A sketch of what I have in mind follows this list.)
  2. If 1 is not possible, or if the USB just dies too frequently under Proxmox as opposed to ESXi -- can I install Proxmox on one partition of the HDD but still utilize the rest of the HDD for the datastore? I assume this should be possible, since we do this all the time with Linux installs and Proxmox is based on Debian.
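
Something like this is what I had in mind for question 1 -- a minimal sketch, with a made-up partition name (/dev/sdb2) standing in for a partition on the HDD array:

Code:
    # From a rescue shell: move /var off the USB stick onto an HDD partition
    mkfs.ext4 /dev/sdb2              # hypothetical partition on the array
    mount /dev/sdb2 /mnt
    cp -a /var/. /mnt/               # preserve ownership and permissions
    echo '/dev/sdb2  /var  ext4  defaults,noatime  0  2' >> /etc/fstab
    reboot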
Installing to an SSD is not possible because of the 1U size. I tried fitting a 2.5" SSD next to the motherboard, but it sits 3-4 mm above the board, which I am not comfortable with. Apart from that, I would have to fit Molex-to-SATA power adapter cables into that tight space, and the cables would hover over the board and block the airflow in an already cramped chassis. So putting in a couple of SSDs is not a practical option either.


I found these two threads for question 2: https://forum.proxmox.com/threads/use-same-hard-drive-for-proxmox-and-data-partition.48749/
https://forum.proxmox.com/threads/how-to-mount-second-partition-or-disk.13215/

So it seems that it is possible, but are there any caveats that I should be aware of?

Your answers will help me decide between ditching ESXi and moving over to Proxmox, or continuing with ESXi -- just the way it is -- at least until I upgrade to a larger server.


Thanks for your time
 
I had a similar problem: too few SATA ports. I put an mSATA SSD into a USB enclosure and attached it with a short 90° cable to a USB-A port on the board.
This has been running fine for nearly two years in my cluster as the boot drive for the Proxmox installation. It also works in another installation with an M.2 SATA USB enclosure. A small solution if size matters. In the meantime, M.2 SSDs have become easier to buy.

Not the fastest thing during updates, but OK. And you lose SSD features like TRIM, and statistics like wear-out or temperature. USB does not support many nice-to-have things.
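
Whether any SMART data survives the USB bridge depends on the enclosure's chipset; smartmontools can sometimes tunnel through with SAT passthrough. A quick check, assuming the enclosure shows up as /dev/sda:

Code:
    # Try SCSI-to-ATA Translation (SAT) passthrough; many, but not all,
    # USB bridge chips support it
    smartctl -a -d sat /dev/sda
    # Or let smartmontools guess which device type to use
    smartctl -d test /dev/sda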

I have had no problems, but you have to decide whether this is a way for you.
 
I run one of my nodes (a Dell R510) from a USB stick but haven't quite figured out how to get the BIOS to boot consistently from it. The H700 RAID controller, which I have yet to replace with a SAS adapter (the one I bought was 2mm too long to fit in the H700's slot), always seems to get considered first, and no matter how I set the default boot order it fails and I have to hit F11 and select the USB stick manually. After that it does fine. I would like to see more tutorials on setting up a caching SSD and the like. I just threw the installer at the machine and picked the USB option to save a drive bay, but I can't really put the server anywhere without a keyboard and screen attached in case it ever gets restarted.
 
@kkjensen: How long have you been using your USB stick as the boot drive? Did you move /var or any other partitions off the USB and onto another HDD/SSD? Did you take any precautions to make the USB last longer -- or has it lasted in spite of having the entire install on the stick?

I haven't had any problems running ESXi off the USB, even across reboots. And even if I did, IPMI comes in handy at those times.
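
For reference, this is the sort of thing I can do over IPMI when a box won't come up cleanly -- a sketch, with a made-up BMC address and credentials:

Code:
    # Power control and console over the BMC (address/user/password are examples)
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis power status
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis power cycle
    # Watch the boot remotely via Serial-over-LAN
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret sol activate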
 

Nothing in particular. It's a little 32GB Ultra that had ESXi on it previously. I'm not in a production environment, and since it's all about training I actually don't mind things going wrong from time to time, so I can experience how HA failover happens and what it takes to recover. To my understanding, booting from a USB is fine as long as you're not swapping or writing frequently to the drive (see the sketch below). I believe the correct way is to have two SSDs running the OS so you have redundancy at that level. There is a second USB slot as well, but I've never seen any kind of mirroring for redundancy between USB drives.
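
A quick way to sanity-check how much is being written to the stick (assumes the sysstat package; /dev/sdb is a stand-in for whatever the stick enumerates as):

Code:
    # Report device throughput every 5 seconds; kB_wrtn/s should sit near zero
    iostat -d 5 /dev/sdb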
 
Thanks @kkjensen.

I am currently 70% of the way through installing Proxmox to a 16GB USB drive. However, the installation is taking unusually long -- over 30 minutes so far -- which makes me think it's probably not a good sign. ESXi installed itself in under 3-4 minutes, and according to the installation page Proxmox should take about the same.

The advantage of Proxmox is that you can partition the HDD for the install and then re-use the rest of the HDD for the datastore, which ESXi didn't allow as far as I remember. So I am going to perform a second install immediately after this one, get rid of the USB sticks, and install on my main array of drives. I also don't need redundancy at the OS level: it's just a home server, and if it goes down for a while and I have to re-install Proxmox and import my datastores back in, it's just not a big deal.

I have 4x500GB in a RAID6, which gives me 930GB of available space (RAID6 uses two drives' worth of capacity for parity, leaving 2x500GB, i.e. roughly 930GiB). I plan on giving 20GB to the Proxmox install; the remaining 910GB can be used for the datastore. I also plan on adding a couple of NFS datastores, which will hold the ISO files for the various distros I plan to eventually use (Arch Linux, Debian, CentOS).
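
Something like this is what I have in mind for the NFS side -- a sketch, with a made-up server address, export path, and storage name:

Code:
    # Register an NFS export with Proxmox for ISOs and container templates
    pvesm add nfs nas-isos --server 192.168.1.10 --export /volume1/isos \
        --content iso,vztmpl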

The only problem is that if the RAID card dies in the future, I will probably lose the boot OS and the datastore together. But I'll deal with that when I get there.
 
I may have to shut down and dd my install to a bigger drive, or find a way to move some stuff off of it. I used an 8GB drive, and by the time everything was partitioned (using default settings) I only had 24% space left, which makes some parts of the setup complain (like Ceph throwing an error about a drive that is uncomfortably full).
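
If I do go the dd route, it would look something like this -- a sketch with made-up device names (worth triple-checking, since dd happily overwrites the wrong disk):

Code:
    # Clone the small stick onto a larger drive, then reboot from the new one
    dd if=/dev/sdb of=/dev/sdc bs=4M status=progress conv=fsync
    # The extra space still has to be claimed afterwards, e.g. by growing
    # the last partition and the LVM volumes the default install creates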
 
My experience (I have been in "ESXi" land as well): ESXi works far better from USB disks than Proxmox does, even when moving /var/log to a tmpfs, etc.
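
For reference, the tmpfs mount for /var/log looked roughly like this (the size is from memory):

Code:
    # /etc/fstab -- keep logs in RAM to spare the stick (logs are lost on reboot)
    tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755,size=128m  0  0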

I was using the same 4GB SMI USB disk for Proxmox, and it started to show issues (not booting, read-only filesystem) very soon after my migration. The ESXi stick had only ever been replaced upon upgrades (to keep the old version around).

I thought it was a bad stick, so I got some decent 8GB USB sticks (USB 3, fast, reliable), but after about 6-9 months I was in the same place again.
Read-only filesystem, erratic system behavior, etc. A reboot fixed my issues most of the time (every 1-2 weeks); sometimes an e2fsck was necessary. But it also happened that the installation went "up in smoke". Not too bad -- I had an image...

Long story short: I read a bit about the differences between USB sticks and SSDs. The conclusion was that USB sticks work very differently (mainly in that wear-leveling is essentially non-existent), so I chose to get myself a small SSD to boot from. It cost me €8 -- yes, 8! -- and I wish I had done it earlier...
It is an 8GB Apacer industrial SSD, and since then everything works like a charm. It boots fast, it upgrades fast. All fine.

And regarding size: 4GB is indeed a bit small. 8GB is OK if you "clean up" after the system, e.g. remove unneeded kernels, modules, etc.

Check the disk usage with the "du" command. I found that /lib/modules was eating a lot of space...
I created a script that cleans up after updates etc. (a trimmed-down sketch follows). If interested, I am happy to share.
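
Roughly what it does (the exact steps depend on the release):

Code:
    #!/bin/sh
    # Reclaim space on a small boot disk after upgrades
    apt-get -y autoremove --purge     # drop old kernels and orphaned packages
    apt-get clean                     # empty the apt package cache
    journalctl --vacuum-size=32M      # cap the systemd journal
    du -xh --max-depth=1 / | sort -h  # show what is still using space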
 
I think your findings would make a great wiki write-up!

 
I ended up re-installing Proxmox on the HDD array itself. I partitioned it manually to give the Proxmox root 15GB and swap 8GB, with the rest (about 907GB) as an LVM-thin pool to hold my VM disks. I know I might lose the VM disks if Proxmox or the RAID card dies, but since it's just a home server, I am OK with the risk.
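
Roughly what the datastore side looks like -- a sketch, assuming the installer's default volume group name (pve) and a made-up storage ID:

Code:
    # Create a thin pool on the leftover space and register it with Proxmox
    lvcreate -L 907G --thinpool data pve
    pvesm add lvmthin local-thin --vgname pve --thinpool data \
        --content images,rootdir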

I shared a couple of NAS exports via NFS to store the ISOs and container templates, and I usually mount my NAS exports in my VMs and containers so I can back up the important stuff quickly.
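
For the backups, a one-liner like this does the job -- a sketch, assuming a guest with ID 100 and an NFS storage named nas-backup (made-up name):

Code:
    # Snapshot-mode backup of guest 100 to the NFS storage, zstd-compressed
    vzdump 100 --storage nas-backup --mode snapshot --compress zstd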
 
