Hi, we got the very same issue - after manually forcing update-fingerprints it works again, yet we too see this issue basically every two months.
Which logs would you need from us?
Hi,
in a PVE 7.4 cluster setup (4 nodes) one node sadly died.
Before the final "delnode" we removed all entries from /etc/pve/replication.cfg that had this node as a target and regrouped the replication to the remaining nodes.
Then we regrouped all replication groups; only then, according to...
indeed it is igb:
[ 278.174316] igb 0000:01:00.3 ens9f3: igb: ens9f3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 278.241811] igb 0000:01:00.0 ens9f0: igb: ens9f0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 278.733213] igb 0000:01:00.2 ens9f2: igb: ens9f2 NIC Link is...
Hi,
we tested .104 on two non-cluster systems - working fine so far (and only then tried the cluster).
Then we tried .104 on a cluster and things went sideways a lot (reboots, NFSv4 hanging while NFSv3 was fine, but not on all systems).
The system in question is a cluster member and HA is...
the PVE kernel 5.15.104-1 seems a bit unstable in real life - it reboots seemingly out of nowhere, and we only got this in the kernel.log:
Apr 8 11:41:44 k14 kernel: [746103.306950] vmbr2: port 30(tap130024i0) entered disabled state
Apr 8 11:51:04 k14 kernel: [ 0.000000] Linux version...
after looking into the source code I would agree it should be handled :)
yet the only log entry I can find in syslog looks like the fingerprint changed and that's it...
Jan 24 02:19:35 pmgconfig[25552]: fingerprint...
Got the same issue here, but on two PMG 7.2-4 installations: they do Let's Encrypt fine, but the fingerprints are not handled.
Manually running "pmgcm update-fingerprints" works, but is a bit annoying (which makes putting it into cron a real consideration).
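For reference, a rough sketch of that cron idea (the file name, schedule and redirection are just my picks, nothing official, and it assumes pmgcm is on cron's PATH), dropped into /etc/cron.d/pmg-update-fingerprints:

# work around the stale fingerprints once a night until this is handled properly
0 3 * * * root pmgcm update-fingerprints >/dev/null 2>&1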
any way to debug pmg-letsencrypt-renewals and the...
Just tried a node reboot with auto-migration - and watched it fail into a loop because of snapshots :)
Would be nice if a migration that fails for snapshot reasons (or because of a locally connected CD-ROM) within the VM gave us an option to get things migrated with stop+start or to simply eject a...
it seems we have the same issue on proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
could someone clarify whether the issue is simply one cache not being suitable for more than one ZFS storage, or whether some other issue is at work?
well, thanks to @fabian - the environment variable PVE_MIGRATED_FROM is in fact a game changer for hookscripts.
this way we can detect during pre-start whether cleanup on the former host is needed and script some SSH calls to the former host to remove now-inactive configurations. not totally...
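To illustrate, a minimal sketch of the kind of hookscript we mean (the snippet name and the cleanup command on the old host are made up; only PVE_MIGRATED_FROM and the vmid/phase arguments come from how hookscripts are invoked):

#!/bin/bash
# registered e.g. via: qm set <vmid> --hookscript local:snippets/cleanup-hook.sh
vmid="$1"
phase="$2"

if [ "$phase" = "pre-start" ] && [ -n "$PVE_MIGRATED_FROM" ]; then
    # we arrive here on the target node right before the migrated VM starts;
    # PVE_MIGRATED_FROM names the node the VM came from
    ssh "root@$PVE_MIGRATED_FROM" "rm -f /etc/my-app/vm-$vmid.conf"   # placeholder cleanup
fi
exit 0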
Late reply as I’m also hunting for migration hooks.
For compartmentalizing networking for some VMs you might consider bridges bound only to that network environment, doing pure layer 2 for your VM, which would in turn have to do its own networking.
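A rough sketch of what I mean in /etc/network/interfaces (bridge and NIC names are just examples): with no address configured on the host side the bridge only switches layer 2 for the attached guests; use "bridge-ports none" instead if the segment should stay purely host-internal.

auto vmbr10
iface vmbr10 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0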
We have two KVM guests, one Ubuntu 20.04 LTS and one Debian 10.
One has a 100 GB disk, using roughly 54 GB; the other a 240 GB disk, using roughly 146 GB.
Both are replicated using pvesr, and both have basically static content in their filesystems (lots of images that never change).
zfs list (and -t...
systems are at the latest available (imho) - will report back if we can reproduce this again :)
pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15.83-1-pve: 5.15.83-1...
We're still testing ZFS replication using pvesr.
One issue we ran into was a vzdump snapshot being created during a backup; pvesr then did its thing and we got stuck with this vzdump snapshot, which, when we tried to remove it, responded: snapshot 'vzdump' needed by replication job - run...
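For anyone hitting the same message, a hedged sketch of the commands involved (the VMID 100 and job ID 100-0 are placeholders, not from our setup):

pvesr status                 # see which replication job still references the snapshot
pvesr schedule-now 100-0     # let that job run so it no longer needs 'vzdump'
qm delsnapshot 100 vzdump    # afterwards the snapshot should be removable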
Hi,
we are testing pvesr to replicate several VEs for easy migration between several nodes.
Basically this works nicely, but if node A replicates a big chunk (100+ GB) to node B while B is trying to replicate another VE to A, this results in hangs within that VE while the ZFS replication tries and...
Also - and here I'd argue for consistency - the pre-start hook is executed on migration, which, if the argument is "live migration", probably also shouldn't happen. Or - if the default behaviour should not be changed, which I'm all in favor of - give us an option to enable pre-start and post-stop...
Hi,
I'll take whatever hook we can get; I'd even take a start-stop-migration option for KVM in the HA management, as with live migration we cannot handle this issue in any way - and better to have a working networking setup than a migrated KVM that is now offline to the world...