OK, so I ended up setting a native VLAN on my switch, so that untagged traffic gets tagged with ID 12 (which is the VLAN for normal Proxmox traffic - 15 is for Ceph, 19 is for Corosync).
I noticed that there is the option to create a VLAN in the Proxmox GUI:
Anyhow, I have created my two...
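For anyone following along, the relevant part of my /etc/network/interfaces ends up looking roughly like the sketch below (interface name and addresses are just examples): vmbr0 rides the untagged port, which the switch tags as VLAN 12, while Ceph and Corosync each get a tagged sub-interface.

auto eno1
iface eno1 inet manual

# Bridge on the untagged (native) port - the switch tags this as VLAN 12
auto vmbr0
iface vmbr0 inet static
    address 10.0.12.11/24
    gateway 10.0.12.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Tagged VLAN 15 for Ceph
auto eno1.15
iface eno1.15 inet static
    address 10.0.15.11/24

# Tagged VLAN 19 for Corosync
auto eno1.19
iface eno1.19 inet static
    address 10.0.19.11/24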
I hit this same issue with a SuperMicro 2124BT-HNTR as well.
By default, the boot mode is set to "DUAL" - if you try to install Proxmox using ZFS, you will get this error on reboot:
However, if you set the boot mode to "UEFI" - and re-run the installation, it works.
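As an aside, a quick way to confirm which mode the installed system actually booted in (this is a generic Linux check, nothing Proxmox-specific):

[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"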
The Proxmox documentation mentions:
However, I am only specifying a DB device - Proxmox then automatically puts the WAL on the same device. (Earlier thread discussing this).
If I only specify "db_size" - what will pveceph pick for the WAL size?
Or how exactly do I check what the DB/WAL size...
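The closest I've found so far (not sure it's the canonical way) is asking the OSD itself over its admin socket, and listing the LVs that ceph-volume created - e.g. for osd.0, on the node that hosts it:

ceph daemon osd.0 perf dump bluefs | grep -E 'db_total_bytes|wal_total_bytes'
ceph-volume lvm list

As far as I can tell, wal_total_bytes reads 0 when the WAL simply shares the DB device rather than having its own volume.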
I have a 4-node Proxmox cluster
Each node has:
1 x 512GB M.2 SSD (for Proxmox)
1 x Intel Optane SSD (895 GB) for Ceph WAL/DB
6 x Intel SATA SSD (1.75 TiB) for Ceph OSDs
I am trying to set up OSDs on the SATA SSDs, using the Optane as the WAL/DB drive.
However, when I get to the 5th drive...
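For reference, this is roughly the command I'm running per disk (device names are examples). My current suspicion is that the default DB size - 10% of the OSD size, if I'm reading the docs correctly - doesn't leave room for six DB volumes on the ~895 GB Optane, so I'm also trying an explicit db_size small enough to fit six times over:

pveceph osd create /dev/sdb -db_dev /dev/nvme0n1 -db_size 130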
I'm trying to set up a new Proxmox/Ceph cluster, using Intel S3610 SSDs for the OSDs, and Intel Optane 905Ps for the WAL/DB disk.
I'm using the commands from the documentation here.
However, if I try to put both the DB and WAL on the Optane disk, I get the following error:
Is anybody able to help with the above questions, i.e. with deciphering the tasks output?
In particular, I'm stuck on how to get the friendly name for clones (as they appear in the GUI), or friendly names for disks?
And - is there any interest in getting some kind of dashboard, or export of this...
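In case it helps frame the question: the closest I've found for mapping VMIDs to their friendly names is the cluster resources endpoint, e.g.

pvesh get /cluster/resources --type vm --output-format json-pretty

but I still can't see how to resolve the names used for clones/disks from the tasks output itself.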
I'm trying to create a realtime dashboard of the number of running/stopped VMs on a Proxmox cluster.
What is the easiest way of doing this?
qm list seems to be one option:
root@foo-vm01:~# qm list --full true
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
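qm list only covers the local node, though, so for a cluster-wide count I'm leaning towards the API via pvesh instead (assuming jq is installed) - something like:

root@foo-vm01:~# pvesh get /cluster/resources --type vm --output-format json | jq '[.[] | select(.status == "running")] | length'
root@foo-vm01:~# pvesh get /cluster/resources --type vm --output-format json | jq '[.[] | select(.status == "stopped")] | length'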
Thanks wolfgang and spirit for the pointer! =)
The issue was the rbdname - I needed to point it to an actual RBD volume.
The client name is just the Ceph username (e.g. "admin"). I assume fio must use a default of admin, as it seems to work without it (and I assume Proxmox creates the user...
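For anyone who hits this later, the invocation that ended up working for me was roughly the following (pool and image names are examples, and the RBD image has to exist before fio can use it):

rbd create -p mypool fio_test --size 10G
fio --ioengine=rbd --clientname=admin --pool=mypool --rbdname=fio_test \
    --name=rbd_bench --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based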
I have a new Proxmox cluster set up, with Ceph set up as well.
I have created my OSDs, and my Ceph pool.
I'm now trying to use fio with ioengine=rbd to benchmark the setup, based on some of the examples here.
However, it doesn't appear to be working on Proxmox's Ceph setup out of the box:
I have a SuperMicro 1029P-WTR, and I have just installed Proxmox 6.1 on it.
The boot disk is a M.2 NVMe SSD (Team MP34).
I chose to install on ZFS (RAID0) on this disk.
I had boot mode previously set to DUAL, but I've changed it to UEFI after the install (SuperMicro won't seem to boot from...
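In case it's useful to anyone else chasing boot-order issues on these boards, the firmware-side boot entries can be checked from the installed system with efibootmgr (which, as far as I know, ships with the standard install):

efibootmgr -v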
To answer this question - see here:
You can create separate virtual network interfaces in Linux, each on a different VLAN, and then assign them to the Corosync/Ceph networks when you run the wizard.
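As a concrete sketch, the CLI equivalents would be along these lines (addresses are examples sitting on the dedicated VLAN interfaces; the GUI wizards ask for roughly the same values):

pvecm create my-cluster --link0 10.0.19.11    # Corosync over the VLAN 19 address
pveceph init --network 10.0.15.0/24           # Ceph public network on VLAN 15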
Yes, I saw the 6.1 release notes.
I believe you're referring to it now migrating VMs when you intentionally shut down a host.
However, unless I'm mis-reading the feature, this isn't the same as auto-scheduling of VMs.
Many modern hypervisors have a scheduling policy for clusters - where when...