The Proxmox documentation mentions:
However, I am only specifying a DB device - Proxmox then automatically puts the WAL on the same device. (Earlier thread discussing this).
If I only specify "db_size" - what will pveceph pick for the WAL size?
Or how exactly do I check what the DB/WAL size...
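For the size question: as I understand BlueStore, when no separate WAL device is given, the WAL simply lives on the DB volume, so pveceph does not need to pick a WAL size at all. A sketch of how to read the allocated sizes back out of a running OSD — the OSD id (0) and the sample JSON below are assumptions for illustration; on a real node you would pipe `ceph daemon osd.0 perf dump bluefs` in instead:

```shell
# Trimmed sample of `ceph daemon osd.0 perf dump bluefs` output, parsed locally
# so the extraction step is reproducible (values here are made up: 60 GiB DB,
# 1 GiB WAL).
sample='{"bluefs":{"db_total_bytes":64424509440,"wal_total_bytes":1073741824}}'
echo "$sample" | python3 -c '
import json, sys
b = json.load(sys.stdin)["bluefs"]
print("DB: %d GiB" % (b["db_total_bytes"] // 2**30))
print("WAL: %d GiB" % (b["wal_total_bytes"] // 2**30))
'
```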
Hi,
I have a 4-node Proxmox cluster
Each node has:
1 x 512GB M.2 SSD (for Proxmox)
1 x Intel Optane SSD (895 GB) for Ceph WAL/DB
6 x Intel SATA SSD (1.75 TiB) for Ceph OSDs
I am trying to set up OSDs on the SATA SSDs, using the Optane as the WAL/DB drive.
However, when I get to the 5th drive...
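As far as I can tell, the Proxmox docs say the DB size defaults to 10% of the OSD size when not specified, which would explain a failure on exactly the 5th drive here: six 1.75 TiB OSDs sharing one 895 GB Optane. A back-of-the-envelope check (the 10% default is the assumption):

```shell
# Assumed: pveceph defaults block.db to 10% of the OSD size when no db_size is
# given. Four such DB partitions fit on the 895 GB Optane; five do not.
awk 'BEGIN {
  osd_gib    = 1.75 * 1024        # 1792 GiB per OSD
  db_gib     = osd_gib * 0.10     # ~179.2 GiB per DB partition (assumed default)
  optane_gib = 895e9 / 2^30       # ~833.5 GiB on the 895 GB Optane
  printf "DB per OSD: %.1f GiB\n", db_gib
  printf "4 OSDs need %.1f GiB, 5 need %.1f GiB (Optane has %.1f GiB)\n",
         4 * db_gib, 5 * db_gib, optane_gib
}'
```

If that is the cause, passing an explicit smaller `db_size` when creating the OSDs should let all six fit.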
Hi,
I'm trying to set up a new Proxmox/Ceph cluster, using Intel S3610 SSDs for the OSDs, and Intel Optane 905Ps for the WAL/DB disk.
I'm using the commands from the documentation here.
However, if I try to put both the DB and WAL on the Optane disk, I get the following error:
# pveceph...
Is anybody able to help with the above questions about deciphering the tasks output?
In particular, I'm stuck on how to get the friendly name for clones (as they appear in the GUI), or friendly names for disks?
And - is there any interest in getting some kind of dashboard, or export of this...
I'm trying to create a real-time dashboard of the number of running/stopped VMs on a Proxmox cluster.
What is the easiest way of doing this?
qm list seems to be one option:
root@foo-vm01:~# qm list --full true
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
100...
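A sketch of one way to get the counts, parsing a captured `qm list` sample so the pipeline itself is reproducible (the VM names/ids below are made up); for a cluster-wide view, `pvesh get /cluster/resources --type vm` returns the same state in machine-readable form:

```shell
# Captured `qm list`-style output; on a node you would pipe the real command.
qm_sample='      VMID NAME       STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 web01      running    4096              32.00 1234
       101 db01       stopped    8192              64.00 0'

# Tally the STATUS column (field 3), skipping the header row.
printf '%s\n' "$qm_sample" | awk 'NR > 1 { n[$3]++ }
  END { printf "running=%d stopped=%d\n", n["running"]+0, n["stopped"]+0 }'
```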
Thanks wolfgang and spirit for the pointer! =)
The issue was the rbdname - I needed to point it to an actual RBD volume.
The client name is just the Ceph username (e.g. "admin"). I assume fio must use a default of admin, as it seems to work without it (and I assume Proxmox creates the user...
I have a new Proxmox cluster set up, with Ceph configured as well.
I have created my OSDs, and my Ceph pool.
I'm now trying to use fio with ioengine=rbd to benchmark the setup, based on some of the examples here.
However, it doesn't appear to be working on Proxmox's Ceph setup out of the box:
#...
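For anyone landing on this thread later, a sketch of a fio job along the lines of the fix described above — it assumes the pool is called `rbd`, that the image `fio-test` was created beforehand (`rbd create rbd/fio-test --size 10G`), and the default `admin` client from a stock Proxmox Ceph setup:

```
; fio job file sketch (pool/image names are placeholders)
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
direct=1

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based=1
```

The key points from the thread: `rbdname` must point at an actual RBD image, and `clientname` is just the Ceph username without the `client.` prefix.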
I'm connecting the ISO through a Raritan Dominion KX3 KVM - this goes via USB but I believe it exposes it as an optical drive.
When set to UEFI - it simply does not show that as a bootable option.
Aha, the issue is - if I set it to UEFI from the start - then the SuperMicro doesn't seem to boot from the ISO.
If it's in DUAL - is there some way to force the Proxmox installer to go to UEFI mode?
I have a SuperMicro 1029P-WTR, and I have just installed Proxmox 6.1 on it.
The boot disk is a M.2 NVMe SSD (Team MP34).
I chose to install on ZFS (RAID0) on this disk.
I had boot mode previously set to DUAL, but I've changed it to UEFI after the install (SuperMicro won't seem to boot from...
To answer this question - see here:
https://forum.proxmox.com/threads/how-do-you-tag-a-interface-in-proxmox-with-a-vlan.61173/
You can create different virtual network interfaces in Linux, each one a different VLAN, then assign them to the Corosync/Ceph networks when you run the wizard.
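A minimal `/etc/network/interfaces` sketch of that approach — the interface name, VLAN tags, and addresses below are placeholders:

```
# One VLAN sub-interface per network on a single physical port (eno1 assumed)
auto eno1.50
iface eno1.50 inet static
    address 10.10.50.11/24
    # Corosync network

auto eno1.60
iface eno1.60 inet static
    address 10.10.60.11/24
    # Ceph network
```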
Yes, I saw the 6.1 release notes.
I believe you're referring to it now migrating VMs when you intentionally shut down a host.
However, unless I'm mis-reading the feature, this isn't the same as auto-scheduling of VMs.
Many modern hypervisors have a scheduling policy for clusters - where when...
Hi,
I'm setting up a new 4-node Proxmox/Ceph HA cluster using 100Gb networking.
Each node will have a single 100Gb link. (Later on, we may look at a second 100Gb link for redundancy).
Previously, we were using 4 x 10Gb links per node:
1 x 10Gb for VM traffic and management
1 x 10Gb for...
Hi,
I have a macOS Mojave VM running on Proxmox (per this guide).
However, if my local machine is running Linux - how do I send a Command key (⌘) through to the VM, using the noVNC client?
Thanks,
Victor
From reading online - I think
https://community.mellanox.com/s/article/howto-set-dell-poweredge-r730-bios-parameters-to-support-sr-iov
However, I'm not sure if the R630 supports IOMMU?
Anyhow - the use-case for this was to run ntopNG in a VM - I wanted to pass through a NIC, with one of...
Is there normally a separate setting for IOMMU?
There's a setting for VT-d (defaults to on). That is currently on.
And there's a setting for SR-IOV (defaults to off). I've enabled that.
I did see another setting for "X2APIC" mode that is currently disabled. Is that related at all to IOMMU?
Hi,
I am trying to do PCI Passthrough per the wiki article:
https://pve.proxmox.com/wiki/Pci_passthrough
I am running Proxmox 6.0 on a Dell R630 server, with an Intel E5-2696 v4 CPU, and two X520-DA2 NICs.
I have added the following to /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet...
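For reference, the form that line takes in the wiki article for an Intel CPU is the following — this is the wiki's example, not necessarily the poster's exact line:

```
# /etc/default/grub, per the PCI passthrough wiki article (Intel CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# Afterwards: run update-grub, reboot, then verify IOMMU is active with
#   dmesg | grep -e DMAR -e IOMMU
```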
Thanks Stoiko for getting back! =)
However, *both* ports are plugged into the same NIC card. (Intel X520-DA2, I believe)
(The machine is remote - it's actually on another continent, lol - I'm in Australia, machine is in US).
Can I still use PCI passthrough in that case?
I assumed I'd still...