Search results

  1. gurubert

    Clean shutdown of whole cluster

    Hi, what is the recommended procedure to shut down a complete PVE cluster, including HA resources? The manual only covers maintenance of single nodes, but sometimes it is necessary to shut down everything. We have observed that simply shutting down all nodes is not sufficient, as HA fencing...
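    A minimal sketch of one possible order of operations, assuming HA-managed guests are stopped before the nodes power off (the resource ID vm:100 is a placeholder, not from the thread):

      # Stop each HA-managed guest first so that fencing is not triggered
      ha-manager set vm:100 --state stopped
      # Once all HA resources report stopped, power off the nodes one by one
      shutdown -h now
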
  2. gurubert

    ceph-volume@.service also needs a ceph-after-pve-cluster.conf

    The systemd unit ceph-volume@.service activates the local OSDs. To do that it needs ceph.conf. On Proxmox nodes /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf and is therefore only available after pve-cluster.service is running. Please add a ceph-after-pve-cluster.conf into...
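    A hedged sketch of the requested drop-in, mirroring the ceph-after-pve-cluster.conf that Proxmox already ships for other Ceph units (the local override path is an assumption):

      # Create a systemd drop-in so ceph-volume@ waits for pve-cluster
      mkdir -p /etc/systemd/system/ceph-volume@.service.d
      printf '[Unit]\nAfter=pve-cluster.service\n' \
        > /etc/systemd/system/ceph-volume@.service.d/ceph-after-pve-cluster.conf
      systemctl daemon-reload
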
  3. gurubert

    Dell PowerEdge R440 install fails: no ISO found

    Hi, when booting the PVE 7.1 ISO from a USB stick the ISO cannot be found. In the end it bails out with the error: no device with valid ISO found, please check your installation medium. This is already the second USB stick I have tried. A Debian 11 installation on the same machine works without...
  4. gurubert

    [SOLVED] Scheduling of Verify, Prune and Garbage Collection

    Hi, the backup jobs run during the night between 23:00 and roughly 7:00. When configuring the schedule for verify jobs, prune jobs and garbage collection, I only have a limited choice of hours, all of which conflict with the running backup jobs. Is it possible to extend that list so that these jobs...
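    The schedule fields take systemd-style calendar events, so (assuming free-form input is accepted there) values like the following placeholders would express times outside the dropdown list:

      07:30            # every day at 07:30
      sat 18:00        # Saturdays at 18:00
      mon..fri 09:00   # weekdays at 09:00
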
  5. gurubert

    PVE Cluster Service IP

    I would like to ask for a new feature: a service IP for a PVE cluster, i.e. the Proxmox cluster manager should be able to configure an IP address that is always active on one of the nodes (maybe the cluster leader). This would make it easier for automated tools to talk to the API. In case of a...
  6. gurubert

    Unique device ID for virtual hard disks?

    Hi, currently virtual hard disks show up like this under /dev/disk/by-id:

      lrwxrwxrwx 1 root root  9 Sep 2 13:47 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
      lrwxrwxrwx 1 root root 10 Sep 2 13:47 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
      lrwxrwxrwx 1 root root 10 Sep 2...
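    A common workaround, sketched here with placeholder VM ID, storage name and serial, is to set an explicit serial on the virtual disk, which QEMU then uses in the by-id name:

      # Append serial=... to the existing disk definition of VM 100
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,serial=DISK0001
      # inside the guest the disk should then appear as something like
      # /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_DISK0001
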
  7. gurubert

    LXC: Wrong STIME on processlist inside container

    After upgrading PVE from 5.4 to 6.1 I ran into this issue: https://github.com/lxc/lxcfs/issues/189. ps inside the container shows a process start date in the future. Is there a fix planned?
  8. gurubert

    Storage live migration and thin provisioning

    Hi, the storage live migration creates a thick-provisioned target even if a format like qcow2 is selected. I only found the announcement for PVE 3.0, where someone mentioned using qemu-img convert afterwards to create a sparse image file. But how would this be done on a running VM?
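    For a powered-off VM, the approach mentioned in that announcement looks roughly like the sketch below (file names are placeholders); doing the same on a running VM is exactly the open question of this thread:

      # Re-write the image; zeroed regions are skipped, yielding a sparse file
      qemu-img convert -O qcow2 thick.qcow2 sparse.qcow2
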
  9. gurubert

    Migration from PVE 3 to 5 with GFS2 on shared storage

    Hi, I need to migrate a two-node cluster from PVE 3.2 to 5.1. The cluster uses a shared storage device connected via SAS to the nodes, running GFS2 on top of the shared block devices. GFS2 uses lock_dlm as its locking manager, managed by the cluster manager of PVE. Is it possible to install PVE 5.1...
