Search results

  1. Proxmox cluster keeps crashing - segfault in pmxcfs?

    Hi, I have a 4-node cluster running Proxmox/Ceph. In the last week, two of the nodes have gone down multiple times - each time, the node seems responsive, however it disappears from the cluster. On the console I see a message about a segfault in pmxcfs. Here is the output of pveversion...
  2. VM Shutdown on PVE Host Reboot

    I have something similar - we have a 4-node Proxmox/Ceph cluster, and are not using HA currently. I need to reboot nodes for things like kernel updates. Each of the 4 nodes will have some running VMs, and some stopped VMs. How do I pause all running VMs, then have those ones automatically...
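
    A rough sketch of one way to script this with the stock qm CLI (the column position and state-file path below are my own assumptions, not from the thread): record which VMs are running, shut them down, then start the same set again after the reboot. Note that a plain qm suspend only pauses to RAM, which would not survive the host reboot.

        # Sketch only - save the list of running VMs, shut them down cleanly,
        # reboot the node, then start the saved set again afterwards.
        qm list | awk '$3 == "running" {print $1}' > /root/running-vms.txt
        while read -r vmid; do qm shutdown "$vmid"; done < /root/running-vms.txt
        # ...after the node is back up:
        while read -r vmid; do qm start "$vmid"; done < /root/running-vms.txt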
  3. How do you view the current thin-provisioning ratio? (i.e. what % benefit it's giving you)

    Hi, I have a four-node hyperconverged Proxmox/Ceph cluster. Is there any way to view the current ratio of provisioned to actually-used storage for thin-provisioning? (I.e. I want to see what benefit thin-provisioning is currently giving me.) Thanks, Victor
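
    For the Ceph/RBD side specifically, one way to approximate this is to compare provisioned size against actual usage (the pool name below is a placeholder):

        # Per-pool stored vs. used space across the cluster
        ceph df detail
        # Per-image provisioned vs. used size; "vm-pool" is a placeholder pool name
        rbd du --pool vm-pool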
  4. How do you tag an interface in Proxmox with a VLAN?

    OK, so I ended up setting a native VLAN on my switch, so that untagged traffic gets tagged with ID 12 (which is the VLAN for normal Proxmox traffic - 15 is for Ceph, 19 is for Corosync). I noticed that there is an option to create a VLAN in the Proxmox GUI. Anyhow, I have created my two...
  5. ZFS boot stuck at (initramfs), no root device specified

    I hit this same issue with a SuperMicro 2124BT-HNTR as well. By default, the boot mode is set to "DUAL" - if you try to install Proxmox using ZFS, you will get this error on reboot. However, if you set the boot mode to "UEFI" and re-run the installation, it works.
  6. WAL/DB smaller than default 10%/1%?

    The Proxmox documentation mentions: However, I am only specifying a DB device - Proxmox then automatically puts the WAL on the same device. (Earlier thread discussing this). If I only specify "db_size" - what will pveceph pick for the WAL size? Or how exactly do I check what the DB/WAL size...
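
    One way to check what actually ended up on disk, assuming a standard LVM-backed BlueStore OSD (the OSD ID below is a placeholder): look at the logical volumes pveceph created, and at the bluefs counters of a running OSD.

        # LV sizes show how much of the DB device each OSD's block.db was given
        lvs -o lv_name,vg_name,lv_size
        # On the node hosting osd.0, the bluefs counters report the DB/WAL sizes
        ceph daemon osd.0 perf dump | grep -E 'db_total_bytes|wal_total_bytes'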
  7. WAL/DB smaller than default 10%/1%?

    Hi, I have a 4-node Proxmox cluster. Each node has: 1 x 512 GB M.2 SSD (for Proxmox), 1 x Intel Optane SSD (895 GB) for Ceph WAL/DB, 6 x Intel SATA SSD (1.75 TiB) for Ceph OSDs. I am trying to set up OSDs on the SATA SSDs, using the Optane as the WAL/DB drive. However, when I get to the 5th drive...
  8. How do you put Ceph DB and WAL on the same device?

    Hi, I'm trying to set up a new Proxmox/Ceph cluster, using Intel S3610 SSDs for the OSDs, and Intel Optane 905Ps for the WAL/DB disk. I'm using the commands from the documentation here. However, if I try to put both the DB and WAL on the Optane disk, I get the following error: # pveceph...
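
    For reference, a sketch of the command shape involved (device names and the size value are placeholders, and I believe db_size is taken in GiB - worth checking against pveceph help osd create). As the other WAL/DB thread above notes, giving only --db_dev makes pveceph place the WAL on that same device:

        # Sketch only - device names and the size value are placeholders.
        # With just --db_dev given, the WAL is created on the same device.
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 64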
  9. API to read QEMU VM creation time or uptime?

    Is anybody able to help with the above questions about deciphering the tasks output? In particular, I'm stuck on how to get the friendly names for clones (as they appear in the GUI), or friendly names for disks. And is there any interest in getting some kind of dashboard, or export of this...
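
    In case it is useful to anyone finding this thread, the per-VM status endpoint does expose an uptime field (the node name and VMID below are placeholders):

        # Returns current status, name and uptime (in seconds) for one VM.
        # "pve1" and "100" are placeholders for the node name and VMID.
        pvesh get /nodes/pve1/qemu/100/status/current --output-format json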
  10. Best way to get total number of running/stopped VMs for dashboard?

    I'm trying to create a realtime dashboard of the number of running/stopped VMs on a Proxmox cluster. What is the easiest way of doing this? qm list seems to be one option:

        root@foo-vm01:~# qm list --full true
        VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
        100...
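
    Another option is the cluster resources API, which returns every VM with its status in a single call - a sketch assuming pvesh and jq are available on a node:

        # Count VMs per status (running/stopped) across the whole cluster
        pvesh get /cluster/resources --type vm --output-format json \
            | jq '[.[] | .status] | group_by(.) | map({(.[0]): length}) | add'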
  11. fio with ioengine=rbd doesn't work with Proxmox's Ceph?

    Thanks wolfgang and spirit for the pointer! =) The issue was the rbdname - I needed to point it to an actual RBD volume. The client name is just the Ceph username (e.g. "admin"). I assume fio must use a default of admin, as it seems to work without it (and I assume Proxmox creates the user...
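
    For anyone else hitting this, a minimal invocation along those lines (the pool and image names below are placeholders, and the RBD image has to exist already):

        # "vm-pool"/"fio-test" are placeholder names; clientname is the Ceph
        # user, which is "admin" on a stock Proxmox-managed Ceph cluster.
        fio --name=rbdbench --ioengine=rbd --clientname=admin \
            --pool=vm-pool --rbdname=fio-test \
            --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based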
  12. fio with ioengine=rbd doesn't work with Proxmox's Ceph?

    I have a new Proxmox cluster set up, with Ceph set up as well. I have created my OSDs, and my Ceph pool. I'm now trying to use fio with ioengine=rbd to benchmark the setup, based on some of the examples here. However, it doesn't appear to be working on Proxmox's Ceph setup out of the box: #...
  13. Proxmox will not boot from ZFS (UEFI mode) - No root device specified

    I'm connecting the ISO through a Raritan Dominion KX3 KVM - this goes via USB but I believe it exposes it as an optical drive. When set to UEFI - it simply does not show that as a bootable option.
  14. Proxmox will not boot from ZFS (UEFI mode) - No root device specified

    Aha, the issue is - if I set it to UEFI from the start - then the SuperMicro doesn't seem to boot from the ISO. If it's in DUAL - is there some way to force the Proxmox installer to go to UEFI mode?
  15. Proxmox will not boot from ZFS (UEFI mode) - No root device specified

    I have a SuperMicro 1029P-WTR, and I have just installed Proxmox 6.1 on it. The boot disk is an M.2 NVMe SSD (Team MP34). I chose to install on ZFS (RAID0) on this disk. I previously had the boot mode set to DUAL, but I've changed it to UEFI after the install (SuperMicro won't seem to boot from...
  16. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    @spirit - Any word on open-sourcing your DRS extension for Proxmox? It sounds exciting, would love to see it, even if it's early stages.
  17. Using SR-IOV for separate VM Traffic, Ceph and Corosync networks?

    To answer this question - see here: https://forum.proxmox.com/threads/how-do-you-tag-a-interface-in-proxmox-with-a-vlan.61173/ You can create different virtual network interfaces in Linux, each one a different VLAN, then assign them to the Corosync/Ceph networks when you run the wizard.
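
    As a concrete sketch of that approach (the parent interface name and the addresses are placeholders; the VLAN IDs follow the numbering mentioned in that thread):

        # Create VLAN sub-interfaces on the single 100Gb link: 15 for Ceph,
        # 19 for Corosync. "ens1" and the addresses are placeholders.
        ip link add link ens1 name ens1.15 type vlan id 15
        ip link add link ens1 name ens1.19 type vlan id 19
        ip addr add 10.0.15.11/24 dev ens1.15 && ip link set ens1.15 up
        ip addr add 10.0.19.11/24 dev ens1.19 && ip link set ens1.19 up

    For a persistent setup, the equivalent stanzas would go in /etc/network/interfaces, which is the file the Proxmox network GUI writes to.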
  18. Feature suggestion: Option to set "auto live migration"

    Yes, I saw the 6.1 release notes. I believe you're referring to it now migrating VMs when you intentionally shut down a host. However, unless I'm misreading the feature, this isn't the same as auto-scheduling of VMs. Many modern hypervisors have a scheduling policy for clusters - where when...
  19. How do you tag an interface in Proxmox with a VLAN?

    Hi, I'm setting up a new 4-node Proxmox/Ceph HA cluster using 100Gb networking. Each node will have a single 100Gb link. (Later on, we may look at a second 100Gb link for redundancy). Previously, we were using 4 x 10Gb links per node: 1 x 10Gb for VM traffic and management, 1 x 10Gb for...
  20. How do you send Command key (⌘) to a VM via noVNC?

    Hi, I have a macOS Mojave VM running on Proxmox (per this guide). However, if my local machine is running Linux, how do I send a Command key (⌘) through to the VM, using the noVNC client? Thanks, Victor