I am installing Proxmox 6.3 on an HP Synergy blade chassis with 40Gb QLogic networking modules. When I install Proxmox 6.2 or 6.3, two of the NICs I've allocated to management never bring up their link. I've tried Ubuntu 20.04, CentOS 8.x, VMware 7.0 and stock Debian 10.6...
So I found out why this was failing. I had the Ceph pool set to size: 3 and min_size: 2, which is apparently the number of replicas the pool needs across cluster members before it will accept I/O. Setting it to size: 2 and min_size: 1 allowed me to do what I needed to do. Apparently those were the numbers I had used successfully previously.
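For reference, a sketch of how the pool settings described above can be changed and verified on an existing pool. The pool name "ceph-dev" is taken from later in this thread; substitute your own. Note that size 2 / min_size 1 tolerates only a single replica being available, which is generally considered risky for production data.

```shell
# size     = number of replicas Ceph keeps of each object
# min_size = replicas that must be up for the pool to accept I/O
ceph osd pool set ceph-dev size 2
ceph osd pool set ceph-dev min_size 1

# Verify the new settings:
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size
```

These commands require a running Ceph cluster and an admin keyring on the node where they are run.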
That is understood and that is what we are doing. But in previous deployments, we'd spin up the first node with Ceph and import a VM, then bring up a second Ceph node and import its VM, and finally spin up the third node and the third VM. I don't recall ever having Ceph refuse to move a volume to...
I've provisioned 1 Proxmox host with Ceph that has 2 SSDs for OSDs. Ceph is configured and running, and I've created a Ceph pool called "ceph-dev". When I attempt to move a VM from local storage to Ceph, I get a lock error:
storage migration failed: error with cfs lock 'storage-ceph-dev': rbd...
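A sketch of how one might diagnose this on the single-host setup described above. With only one host and the default size 3 / min_size 2, placement groups can never reach active+clean, so RBD writes block and the migration surfaces as a cfs lock error. The VM ID 100 below is a placeholder.

```shell
# Check whether PGs are degraded/undersized and what the pool expects:
ceph health detail
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size

# If a failed migration left a stale lock on the VM config,
# it can be cleared (replace 100 with your VM ID):
qm unlock 100
```

If `ceph health detail` reports undersized PGs on a single node, lowering size/min_size (or adding nodes) is what unblocks the pool, as noted elsewhere in this thread.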
I'm seeing quite a few ZFS replication failures and I'm not sure how to diagnose the root cause.
I'm just replicating 1 VM to a second Proxmox host. The second host is replicating 1 VM back to the first. Each VM has 2 virtual disks with iothread=1. I've had the same issue with iothread...
@wolfgang these settings seem to have helped greatly, but I could use some additional help. I have ZFS replication set to run every minute. While stress testing the systems, we had a replication failure and timeout that led to a fault in our system. Is this likely because every minute is too...
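One way to test whether the one-minute interval is the culprit is to inspect the jobs and relax the schedule. A sketch using Proxmox's `pvesr` tool; the job ID "100-0" is a placeholder, and the schedule uses systemd-calendar-style syntax.

```shell
# List replication jobs with their state and last sync time:
pvesr status

# Relax a job from every minute to every 15 minutes
# (replace 100-0 with the job ID shown by 'pvesr status'):
pvesr update 100-0 --schedule '*/15'
```

These commands run against a live Proxmox cluster, so they are shown here only as a sketch.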
I still need to stress test the system during replication to make sure it can handle what we expect, but the basic configuration I'm using is this...
@wolfgang thank you for your reply. I stumbled on the freeze and thaw settings in /etc/sysconfig/qemu-ga.
I've set the FSFREEZE_HOOK_PATHNAME=/dev/null and blacklisted the freeze and thaw commands. I'm testing how that impacts our app. So far so good.
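For anyone following along, a sketch of the guest-side settings described above, assuming a RHEL/CentOS-style guest where the agent reads /etc/sysconfig/qemu-ga. The exact variable names can vary between qemu-guest-agent versions, so check the comments in your distribution's copy of the file.

```shell
# /etc/sysconfig/qemu-ga (inside the guest)

# Point the fsfreeze hook at /dev/null so no hook script runs:
FSFREEZE_HOOK_PATHNAME=/dev/null

# Blacklist the freeze/thaw RPCs so the agent rejects them entirely
# and the guest filesystem is never paused during snapshots:
BLACKLIST_RPC=guest-fsfreeze-freeze,guest-fsfreeze-thaw
```

Restart the qemu-guest-agent service in the guest after editing for the changes to take effect.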
I have 2 Proxmox 6.0 nodes with replication set up between them for a single VM. The replication works, but the freeze and thaw of the guest filesystem causes problems for the application running in the VM; it can't handle the brief pause. Is there a way to do the replication without the...