Hi,
what is the recommended procedure to shut down a complete PVE cluster, including HA resources?
The manual only covers maintenance of single nodes, but sometimes it is necessary to shut down everything.
We have observed that a simple shutdown on all nodes is not sufficient, as HA fencing...
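A rough sketch of one commonly suggested order: disable HA for every resource first so fencing cannot trigger, then power the nodes off. The resource IDs below are hypothetical examples, not taken from this thread:

```shell
# Disable HA management for each resource first (IDs are made-up examples)
ha-manager set vm:100 --state disabled
ha-manager set vm:101 --state disabled

# Then shut down the guests, and finally each node in turn
shutdown -h now
```

Whether this is the officially recommended procedure is exactly what the question asks.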
The systemd unit ceph-volume@.service activates the local OSDs. To do that it needs the ceph.conf.
On Proxmox nodes, /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf and is therefore only available once pve-cluster.service is running.
Please add a ceph-after-pve-cluster.conf into...
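The requested drop-in might look like this (a sketch based on the ordering described above; the filename follows the name suggested in the post):

```ini
# /etc/systemd/system/ceph-volume@.service.d/ceph-after-pve-cluster.conf
[Unit]
After=pve-cluster.service
```

A plain After= ordering delays OSD activation until pve-cluster.service has started, without making ceph-volume fail if that unit is absent.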
Hi,
when booting the PVE 7.1 ISO from a USB stick, the ISO cannot be found:
In the end it bails out with the error: "no device with valid ISO found, please check your installation medium".
This is already the second USB stick I have tried.
A Debian 11 installation on the same machine works without...
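For reference, the way the stick was written is a common culprit; the installer ISO is typically written raw with dd. The device name and ISO filename below are placeholders, not from this post:

```shell
# /dev/sdX is a placeholder for the USB stick; verify the device with lsblk first
dd if=proxmox-ve.iso of=/dev/sdX bs=1M conv=fdatasync status=progress
```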
Hi,
The backup jobs run during the night between 23:00 and roughly 7:00.
When configuring the schedule for verify jobs, prune jobs and garbage collection, I only have a limited choice of hours, all of which conflict with the running backup jobs:
Is it possible to extend that list so that these jobs...
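One possible aside: on the CLI, the schedule field accepts a systemd-like calendar event rather than the GUI's fixed list of hours. The job ID below is a hypothetical example:

```shell
# "v-datastore1" is a made-up verify job ID
proxmox-backup-manager verify-job update v-datastore1 --schedule '12:30'
```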
I would like to ask for a new feature:
Service IP for a PVE cluster.
I.e., the Proxmox cluster manager should be able to configure an IP address that is always active on exactly one of the nodes (for example, the current cluster leader).
This would make it easier for automated tools to talk to the API. In case of a...
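Until such a feature exists, a floating service IP can be approximated with keepalived running on the nodes. This is only a sketch; the interface name, router ID and address below are invented examples:

```conf
# /etc/keepalived/keepalived.conf (sketch; values are placeholders)
vrrp_instance PVE_API {
    state BACKUP
    interface vmbr0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

keepalived moves the address to another node when the current holder fails, which covers the "always active on one node" part, though it is not aware of the cluster leader.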
Hi,
currently virtual hard disks show up like this under /dev/disk/by-id:
lrwxrwxrwx 1 root root 9 Sep 2 13:47 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
lrwxrwxrwx 1 root root 10 Sep 2 13:47 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Sep 2...
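As an aside: the by-id name is derived from the emulated drive ID, so identically named disks are hard to tell apart across VMs. If the disk is given an explicit serial in the VM config, the symlink carries that serial instead. The VMID, storage volume and serial below are invented examples:

```shell
# 100, local-lvm:vm-100-disk-0 and SER123 are hypothetical examples
qm set 100 --scsi0 local-lvm:vm-100-disk-0,serial=SER123
# inside the guest the disk then appears as
#   /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_SER123
```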
After upgrading PVE from 5.4 to 6.1 I ran into this issue: https://github.com/lxc/lxcfs/issues/189
ps inside the container shows a process start date in the future.
Is there a fix planned?
Hi,
the storage live migration creates a thick-provisioned target even if a format like qcow2 is selected.
I only found the announcement for PVE 3.0, where someone mentioned running qemu-img convert afterwards to create a sparse image file. But how would this be done on a running VM?
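For the offline case mentioned in that old announcement, the conversion looks roughly like this (filenames are placeholders); how to achieve the same on a running VM is exactly the open question:

```shell
# filenames are hypothetical; run only while the VM is stopped
qemu-img convert -O qcow2 thick.qcow2 sparse.qcow2
```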
Hi,
I need to migrate a two node cluster from PVE 3.2 to 5.1.
The cluster uses a shared storage device connected via SAS to the nodes running GFS2 on top of the shared block devices.
GFS2 uses lock_dlm as its locking manager, which is managed by PVE's cluster manager.
Is it possible to install PVE 5.1...