Hi,
During the migration, the VM ran into a temporary lock issue on the replication bitmaps, as the two lines above show. This can occur if another replication or backup task is holding these resources during the migration. So the question is: was there a replication or backup process running during the migration?
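If I remember correctly, you can list the state of all replication jobs on the node (last sync, current state) with:
pvesr status
The task list at the bottom of the Web UI also shows any backup jobs that were running at that time.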
Hi,
First, I would check the syslog to see what happened!
Then check the root disk space; you can run the `df -h` command to see if `/var` and/or `/` is full.
Did you try to restart the failed services?
systemctl restart pve-cluster pveproxy pvedaemon pvestatd pvescheduler...
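Before restarting, it can also help to look at the current state of one of the services first, e.g. (`pve-cluster` here is just one of the services above):
systemctl status pve-cluster
journalctl -u pve-cluster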
Hi,
Do you see anything in the syslog in the ~30 minutes before and after the reboot?
FYI, you can get the syslog for a specific time/date in the Proxmox VE Web UI by going to `Datacenter -> {NodeName} -> System Log`, or by using the `journalctl` CLI, e.g.:
journalctl --since '2024-10-04 00:00:00' --until...
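For example, a complete one-hour window (the timestamps are placeholders, adjust them to the time of the reboot):
journalctl --since '2024-10-04 00:00:00' --until '2024-10-04 01:00:00'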
Hi,
Can you please post the full output of the `qm start 100` command? You can run this command over SSH, or you can post the information included in the task error `Error: start failed: QEMU exited with code 1` in the [Task Log]. Also, verify the configuration of the SMB storage if the...
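Since the VM uses SMB storage, it may also be worth a quick check whether that storage is currently active on the node:
pvesm status
An inactive or timed-out SMB/CIFS storage in that output would explain a failed start.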
Hi,
To help you better, please provide us with the Proxmox VE network configuration (`cat /etc/network/interfaces`) and the VM config (`qm config <VMID>`, replacing `<VMID>` with the ID of the VM in question). That can help us identify whether the issue is in the network config or the VM config.
Hi,
Can you please try to enter the LXC with `pct enter 101`?
And do you see the output you posted above in the syslog? You can run `journalctl -f` and then try to access the LXC console in the Proxmox VE Web UI.
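If `pct enter` fails too, the container's state and config might give a hint (101 being the CTID from above):
pct status 101
pct config 101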
Ceph needs a MON majority for quorum; losing it halts operations. One option is to add a lightweight `tiebreaker` node to the cluster, which doesn't need to store data but can maintain quorum. Also, consider setting the `noout` flag during maintenance to avoid triggering recovery processes.
By default, a Ceph cluster requires a majority of its monitor nodes to maintain quorum!
Note: with `min_size=1`, there is a risk of data loss if only one replica is available during a write operation.
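A minimal sketch of the `noout` workflow for planned maintenance:
ceph osd set noout
# ...do the maintenance / reboot the node...
ceph osd unset noout
With `noout` set, Ceph won't mark the stopped OSDs as out, so no rebalancing/recovery is triggered while the node is down.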
Hi,
Thank you for the outputs!
Could you please post the output of the `hostname -A` command as well?
EDIT: and the output of the below command, please!
ls /etc/pve/nodes
Hi,
You could install the QDevice on a Raspberry Pi; it's an affordable way to provide the necessary 3rd vote for quorum without the need for (and cost of) a full 3rd node.
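A rough sketch of the setup, assuming the Pi runs a Debian-based OS and is reachable at 192.168.1.50 (placeholder IP):
# on the Raspberry Pi:
apt install corosync-qnetd
# on every cluster node:
apt install corosync-qdevice
# then, from one cluster node:
pvecm qdevice setup 192.168.1.50
Afterwards, `pvecm status` should show the QDevice providing the extra vote.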