Just for the record, the backup on the latest version failed as well.
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-4
pve-kernel-5.11.22-5-pve: 5.11.22-10...
Two days ago I updated to the latest version, rebooted the whole host, and after the first backup one VM failed again. Unfortunately, the log still doesn't tell me anything interesting...
While searching for the needle in the haystack, I just changed the NFS version from 3 to 4. Let's see whether...
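For anyone who wants to try the same: the NFS version can be pinned via the storage's mount options in /etc/pve/storage.cfg. A hypothetical entry (server address and export path are placeholders, not my actual setup; the storage name matches my backup storage):

```
nfs: proxmox_daily
        server 192.168.0.10
        export /backup/proxmox
        path /mnt/pve/proxmox_daily
        content backup
        options vers=4
```

If I remember correctly, the same can be done on the command line with `pvesm set proxmox_daily --options vers=4`.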
Alright, I installed qemu-server-dbgsym, hard-restarted all VMs, and after a couple of backup iterations it failed again. Unfortunately, I don't see much more information...
INFO: starting new backup job: vzdump --compress lzo --storage proxmox_daily --all 1 --quiet 1 --mode snapshot...
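In case someone else wants to dig in with the debug symbols installed, here is a sketch of how a backtrace of the stuck QEMU process could be captured while the backup hangs (VMID 100 is just an example; the pid file path is what my Proxmox hosts use, verify on yours):

```shell
# grab a backtrace of all threads of the VM's QEMU process
VMID=100   # example VMID, adjust to the affected VM
gdb --batch -ex 'thread apply all bt' \
    -p "$(cat /var/run/qemu-server/${VMID}.pid)" > /tmp/qemu-${VMID}-bt.txt
```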
Hi @Moayad
It seems like I'm already using this version :-(
proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-7
pve-kernel-helper: 7.0-7
pve-kernel-5.4: 6.4-4
pve-kernel-5.11.22-4-pve: 5.11.22-8...
Hi
I've had exactly the same issue since I upgraded to 7. The backup fails every few days on random VMs, and afterward some VMs switch their root file system to read-only. Really annoying...
Just for documentation purposes, here is my output for two different cases:
INFO: VM Name: UServer23
INFO...
Please keep in mind that cloning using qm works, as mentioned in my first post. The problem only exists in the GUI.
Here is the journal log. I don't see anything useful before that point.
Around 11:12:04 I started the clone using qm and interrupted it as soon as it started to clone. At 11:12:18 I started...
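For reference, this is the kind of qm invocation that works for me where the GUI fails (the target VMID, clone name, and the choice of a full clone are examples, not necessarily what I ran):

```shell
# full clone of VM 249 to a new VMID, entirely on the command line
qm clone 249 250 --name UServer23-clone --full --storage qnap01.nfs.ssd.vm
```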
Here is the output of the journal.
Jun 10 14:47:33 proxmox05 pvedaemon[61405]: <user@domain> move disk VM 249: move --disk sata0 --storage qnap01.nfs.ssd.vm
Jun 10 14:47:33 proxmox05 pvedaemon[10206]: moving disk with snapshots, snapshots will not be moved!
Jun 10 14:47:33 proxmox05...
Hi Oguz
Thanks for your reply!
I tried to move it and got the same error as well.
Here is my config
qm config 249
bios: ovmf
boot: order=sata0;ide2;net0
cores: 8
cpu: host
efidisk0: qnap01.nfs.hdd.vm:249/vm-249-disk-1.qcow2,size=128K
ide2...
We have an issue in our Proxmox cluster. It seems it is not possible to clone a VM or move its disk to another storage if the VM has more than 13 snapshots + current. Up to 13 snapshots + current, it works fine. The error message I get is:
moving disk with snapshots, snapshots will not be moved!
create...
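To check whether a VM is over that limit, the snapshots can be counted in its config file; on my hosts every snapshot shows up as a `[name]` section in /etc/pve/qemu-server/<vmid>.conf. A small sketch (the helper name is my own, and the config path should be verified on your host):

```shell
# count_snapshots: count "[snapname]" section headers in a VM config file
count_snapshots() {
  grep -c '^\[' "$1"
}
```

For example, `count_snapshots /etc/pve/qemu-server/249.conf` should print the number of snapshots of VM 249. Note that `grep -c` exits non-zero when the count is 0.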
Hi Bernd
I could finally fix my issue, and it was a stupid configuration error! I haven't re-set up the cluster yet, but I'm going to do so now, since my Corosync runs on the same interface as the storage, and from what I've read, those should be separated.
My problem was that the existing servers were using...
Thanks for your reply.
I've been using Proxmox for years and have multiple clusters set up, but I've never had issues like this. I was really surprised by that.
Since nobody seems to know what's wrong here, I'm now going to separate one host from the existing cluster and build a new cluster with the...
Hi Pierre-Yves
Yes, ping works with the IP address. I didn't add the hosts to /etc/hosts, since this is not mandatory according to this page: https://pve.proxmox.com/wiki/Cluster_Manager#_preparing_nodes.
No, is this required?
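Even if the wiki says it's optional, adding all cluster nodes to /etc/hosts on every node costs nothing and rules out name resolution as a factor. A hypothetical example (the addresses and domain are placeholders, only the node names are from my setup):

```
# /etc/hosts on every cluster node
192.168.0.11  proxmox01.example.local proxmox01
192.168.0.15  proxmox05.example.local proxmox05
```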
Regards
Dear Proxmox people
We really need help with this issue! Can someone PLEASE have a look at this post and maybe give some advice on what we could try?
Btw, I'm running the following software:
root@proxmox01:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3...
Hi Bernd
No, I didn't clean it up.
I tried to add the (freshly installed) server two times, and it failed the same way as it did now with the force command. It seems something blocks the pve-cluster service from restarting, but the big question is what...
Regards
Mathias
Hi
I just found out that the existing cluster suddenly thinks it is no longer in a cluster, but everything seems to work fine.
Datacenter -> Cluster -> "Standalone node - no cluster defined"
But Cluster nodes shows 5 nodes, where the 5th is still not working.
pvesh shows that the certificate...
Hi Bernd
Interesting, I tried exactly the same thing and failed the same way you did!
I also have 4 nodes in a cluster and tried to add a 5th. I used the web GUI to join the cluster. It failed the same way you described. Managing was no longer possible as long as the 5th server was in the net...