As with any deployment, start with a proof-of-concept testbed.
As you know, separate the OS from the data. So, use small drives to mirror the OS; I use ZFS RAID-1. Use the rest of the bigger drives for VMs/data. I strongly suggest using an IT/HBA-mode storage...
Just get a used Dell HBA330 storage controller. Flash the latest firmware and Proxmox will use the simpler mpt3sas driver.
I run it in production with both ZFS and Ceph with no issues.
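If you want to double-check which kernel driver a controller ended up bound to, lspci will tell you (the grep pattern below is just a rough filter, adjust as needed):

    # list storage controllers and the kernel driver bound to each
    lspci -nnk | grep -iA3 'raid\|sas'

On an HBA330 you should see "Kernel driver in use: mpt3sas" rather than megaraid_sas.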
Best practice is to separate OS and data storage.
So, install the OS on small, mirrored boot drives. I use ZFS RAID-1. Then use the rest of the drives for VMs/data.
Your future self will thank you.
You can either use the GUI Datacenter option or edit /etc/pve/datacenter.cfg and change the migration network, i.e., migration: type=insecure,network=169.254.1.0/24
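For reference, the relevant line in /etc/pve/datacenter.cfg is just:

    migration: type=insecure,network=169.254.1.0/24

type=secure (the default) keeps migration traffic inside SSH; insecure skips the encryption, which is normally only sensible on a dedicated, isolated migration network like the link-local subnet above.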
Obviously, you need a migration network for this to work. Also, if this is an...
Today I will be setting up a test environment with some much faster storage and networking. I'll be benchmarking the pure HTTP/2.0 method as well as the new netcat+dd and netcat+pigz+dd methods. The hardware should be closer to what you would...
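For anyone who wants to reproduce the numbers, this is roughly what I mean by the netcat+dd and netcat+pigz+dd pipelines (device paths, port and block size are only placeholders, and the nc flags vary between the traditional and OpenBSD variants):

    # receiver: listen on a port and write the stream to the target device
    nc -l -p 2222 | dd of=/dev/zvol/rpool/vm-100-disk-0 bs=1M status=progress

    # sender, plain netcat+dd
    dd if=/dev/zvol/rpool/vm-100-disk-0 bs=1M | nc 10.0.0.2 2222

    # netcat+pigz+dd: compress on the sender with all cores, decompress on the receiver
    dd if=/dev/zvol/rpool/vm-100-disk-0 bs=1M | pigz -c | nc 10.0.0.2 2222
    nc -l -p 2222 | pigz -dc | dd of=/dev/zvol/rpool/vm-100-disk-0 bs=1M status=progress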
I use Intel X550 and Intel X540 10GbE NICs in production without issues. They are running the latest firmware.
For 1GbE, I use the Intel i350, also on the latest firmware, without issues.
I use a Mellanox ConnectX-3 with the latest firmware at home, but obviously use...
I've been migrating Dell servers at work from VMware to Proxmox.
While it's true the PERC RAID controller can be switched to HBA-mode, it uses the megaraid_sas driver, which has caused issues in the past.
I decided to replace the PERC with a Dell...
In production at work, I use the following:
Intel X550
Intel X540
Intel i350
all without issues on the latest firmware.
At home, I use a Mellanox ConnectX-3 SFP+ 10GbE fiber NIC, also on the latest firmware, without issues.
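If you want to verify what firmware a NIC is actually running, ethtool reports it per interface (the interface name is just an example):

    # driver, driver version and firmware version of the NIC behind eno1
    ethtool -i eno1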
I run ZFS on standalone servers and Ceph on clustered servers.
Usually, on Dell servers, the write cache on hard drives is disabled because it is assumed they will sit behind a BBU-backed RAID controller.
Since ZFS & Ceph don't play nice with RAID...
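To check or flip the on-drive write cache, something along these lines works for SATA and SAS disks respectively (device names are examples; verify on a test box first):

    # SATA: show, then enable, the drive write cache
    hdparm -W /dev/sda
    hdparm -W1 /dev/sda

    # SAS: show, then set, the WCE (write cache enable) bit
    sdparm --get=WCE /dev/sdb
    sdparm --set=WCE /dev/sdb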
Nope.
All network traffic is going over a single network cable per node, as physically described (1 -> 2, 3; 2 -> 1, 3; 3 -> 1, 2) at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction
So, yeah, you can say that is a...
I run a 3-node Proxmox Ceph cluster using a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup
Each node is directly connected to the others without a switch. Yes, since it's broadcast...
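For the curious, the broadcast setup on that wiki page boils down to a bond in broadcast mode over the two mesh ports in /etc/network/interfaces, roughly like this (interface names and the 10.15.15.0/24 mesh subnet are just example values; each node gets its own address):

    auto bond0
    iface bond0 inet static
        address 10.15.15.50/24
        bond-slaves ens19 ens20
        bond-miimon 100
        bond-mode broadcast

Ceph's cluster/public network then points at the 10.15.15.0/24 mesh subnet instead of your regular LAN.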
It was quite reproducible in my production environment, with Ceph migrations done every other week. Again, YMMV. All VMs are Linux with the qemu-guest-agent installed.
It was only through trial and error that I found the cache policies that work for ZFS &...
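Without claiming these are the right values for everyone, the knob in question is the per-disk cache option, which you can inspect and change with qm (VM ID, storage and volume names are examples; which mode behaves best on ZFS vs Ceph is exactly the trial-and-error part):

    # show the current disk line, including any cache= setting
    qm config 100 | grep scsi0

    # change the cache policy on that disk, re-specifying the volume
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback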