Is anybody able to help with the above questions, i.e. with deciphering the tasks output?
In particular, I'm stuck on how to get the friendly name for clones (as they appear in the GUI), or friendly names for disks?
And - is there any interest in getting some kind of dashboard, or export of this...
I'm trying to create a realtime dashboard of the number of running/stopped VMs on a Proxmox cluster.
What is the easiest way of doing this?
qm list seems to be one option:
root@foo-vm01:~# qm list --full true
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
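So far the simplest thing I've come up with is just counting the STATUS column from that output, something like the sketch below (the field number is an assumption based on STATUS being the third column in the header above):

# Count running vs. stopped VMs on this node from the qm list output
qm list | awk 'NR > 1 { count[$3]++ } END { for (s in count) print s, count[s] }'

pvesh get /cluster/resources --type vm looks like it would give the same status information for every VM in the whole cluster in one call, which might be a better fit for a dashboard - but I'd be interested to hear if there's an easier way.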
Thanks wolfgang and spirit for the pointer! =)
The issue was the rbdname - I needed to point it to an actual RBD volume.
The client name is just the Ceph username (e.g. "admin"). I assume fio must use a default of admin, as it seems to work without it (and I assume Proxmox creates the user...
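For anyone who finds this later, a minimal sketch of the sort of invocation that should work once rbdname points at a real image (the pool and image names below are placeholders, and it should be a scratch image, since a write test will destroy its contents):

# fio against an existing RBD image - pool/rbdname below are placeholders for a scratch image
fio --name=rbd-bench \
    --ioengine=rbd \
    --clientname=admin \
    --pool=rbd \
    --rbdname=fio-test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --direct=1 --runtime=60 --time_based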
I have a new Proxmox cluster setup, with Ceph setup as well.
I have created my OSDs, and my Ceph pool.
I'm now trying to use fio with ioengine=rbd to benchmark the setup, based on some of the examples here.
However, it doesn't appear to be working on Proxmox's Ceph setup out of the box:
I have a SuperMicro 1029P-WTR, and I have just installed Proxmox 6.1 on it.
The boot disk is a M.2 NVMe SSD (Team MP34).
I chose to install on ZFS (RAID0) on this disk.
I previously had the boot mode set to DUAL, but I've changed it to UEFI after the install (SuperMicro won't seem to boot from...
To answer this question - see here:
You can create different virtual network interfaces in Linux, each one a different VLAN, then assign them to the Corosync/Ceph networks when you run the wizard.
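To make that concrete, a rough sketch (the parent NIC name ens1, the VLAN IDs and the addresses are all placeholders); for a persistent setup the equivalent vlan stanzas would go in /etc/network/interfaces instead:

# Two VLAN sub-interfaces on one physical NIC - names, IDs and addresses are placeholders
ip link add link ens1 name ens1.50 type vlan id 50   # e.g. Corosync network
ip link add link ens1 name ens1.60 type vlan id 60   # e.g. Ceph network
ip addr add 10.10.50.11/24 dev ens1.50
ip addr add 10.10.60.11/24 dev ens1.60
ip link set ens1.50 up
ip link set ens1.60 up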
Yes, I saw the 6.1 release notes.
I believe you're referring to it now migrating VMs when you intentionally shut down a host.
However, unless I'm mis-reading the feature, this isn't the same as auto-scheduling of VMs.
Many modern hypervisors have a scheduling policy for clusters - where when...
I'm setting up a new 4-node Proxmox/Ceph HA cluster using 100Gb networking.
Each node will have a single 100Gb link. (Later on, we may look at a second 100Gb link for redundancy).
Previously, we were using 4 x 10Gb links per node:
1 x 10Gb for VM traffic and management
1 x 10Gb for...
I have a macOS Mojave VM running on Proxmox (per this guide).
However, if my local machine is running Linux - how do I send a Command key (⌘) through to the VM, using the noVNC client?
From reading online - I think
However, I'm not sure if the R630 supports IOMMU?
Anyhow - the use-case for this was to run ntopNG in a VM - I wanted to pass through a NIC, with one of...
Is there normally a separate setting for IOMMU?
There's a setting for VT-d (defaults to on). That is currently on.
And there's a setting for SR-IOV (defaults to off). I've enabled that.
I did see another setting for "X2APIC" mode that is currently disabled. Is that related at all to IOMMU?
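In the meantime, this is how I'm planning to check from the Proxmox side whether IOMMU is actually active (a sketch only, assuming an Intel CPU and that the node boots via GRUB):

# 1. Add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub && reboot

# 2. After rebooting, verify the kernel actually enabled it:
dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/    # non-empty output means IOMMU groups were created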