Don't know what to say :/
This weird behavior is clearly triggered by PBS snapshots.
I can also mention that it affects large MySQL databases without table partitioning (same issue on a Zabbix MySQL database, ~20 GB).
The logs are from the external PBS backup (Tuxis).
However, the local PBS backup (at 3:00 am), with a pretty fast PBS datastore (2 striped raidz vdevs: 10x 2 TB SSD) and a 10 Gb network, shows MySQL locks too.
That backup only takes 10 min instead of 20 min via Tuxis, though.
MySQL logs show lots of:
2022-05-31 20:00:00...
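For reference, this is roughly how I watch the freeze window from the PVE host while the backup runs (just a sketch; 101 is a placeholder VMID):

# Poll the guest agent every 5 s during the PBS backup (101 = placeholder VMID)
while true; do
    echo -n "$(date '+%H:%M:%S') "
    qm guest cmd 101 fsfreeze-status   # reports the fsfreeze state seen by qemu-ga (thawed/frozen)
    sleep 5
done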
Hi,
I noticed that during a PBS backup of a large MySQL VM (with the guest agent installed), MySQL is not able to perform queries (SELECT, UPDATE, INSERT)...
I thought the FS freeze triggered by qemu-ga guest-fsfreeze only took a few seconds, just long enough to get a consistent FS.
Is it the normal...
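For what it's worth, a rough way to see how long the freeze itself takes outside of a backup (be careful, only on a test VM or during a quiet period; 101 is a placeholder VMID):

# Time a manual freeze, then thaw immediately afterwards
time qm guest cmd 101 fsfreeze-freeze    # returns the number of frozen filesystems
qm guest cmd 101 fsfreeze-thaw           # do not leave the FS frozen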
Hi,
I have recently replaced all the HDDs with 10 SSDs in my ZFS datastore.
It is now extremely fast, especially the verify jobs (even if the value of verifying snapshots on a ZFS pool is debatable).
The weird thing is that PBS reports fragmentation on the related pool:
AFAIK, on an SSD...
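For the record, the raw numbers can be read directly from ZFS (the pool name is just an example here):

# FRAG in zpool list is free-space fragmentation, not file/data fragmentation
zpool list tank
zpool get fragmentation tank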
Hi,
It seems that the latest pve7.1 changes the way TOTP entries are recorded in /etc/pve/priv/tfa.cfg.
I have a problem with that, because the 'Secret' is now shown.
V2:
V1:
The V2 format allows root to read all users' secret keys. Some of them use the same TOTP secret for several services...
You're right. Pinging a gw during the live migration did the trick :)
Only lost one ping.
I guess the CT was inactive on my first attempt.
This is a good alternative for getting live migration with CTs!
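For anyone finding this later, the "trick" is simply a continuous ping from inside the CT towards its gateway while the migration runs, presumably so the physical switches re-learn the CT's MAC on the new node quickly (the IP is an example):

# Run inside the container during the live migration of the nested PVE
ping -i 0.2 192.168.1.1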
I am able to live migrate the virtualized PVE with a few LXC containers inside, but the LXCs lose connectivity for about 1 min :/
I was expecting the LXC containers to stay reachable during the live migration :/
Of course, this virtualized PVE would be a standalone server with only LXC containers inside.
Yes, we should plan to recreate these LXC containers as KVM VMs, or simply remove them when convenient...
Thanks :)
Hi,
We usually create KVM VMs, but for historical reasons we still have some old containers: OpenVZ containers that were migrated to LXC back in PVE4.
Now with pve7 some LXCs refuse to start properly (mainly due to an old systemd < 232). Also, LXC live migration is impossible.
So I'm wondering if...
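To see which containers are affected, something like this should print the systemd version of every running CT (untested sketch, assuming the CTs all run systemd):

# List the systemd version of each running container
for id in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    echo -n "CT $id: "
    pct exec "$id" -- systemctl --version | head -n1
done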
don't leave me in the dark :p
Maybe upgrades should be done with the LRM (pve-ha-lrm) and CRM (pve-ha-crm) services stopped, in order to prevent such reboots?
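Something along these lines on each node before upgrading it, I suppose (only a sketch; check the HA documentation first, and move HA resources off the node beforehand):

# Stop the HA stack on this node before upgrading (idea: no LRM/CRM running, no watchdog-triggered reset)
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm

apt update && apt full-upgrade

# Bring HA back afterwards
systemctl start pve-ha-crm
systemctl start pve-ha-lrm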
Hi,
I tried to upgrade a 6-node pve7 cluster yesterday.
We use Ceph and HA for all VMs.
I was able to upgrade 4 nodes without issue.
But on the fifth node, I lost the whole cluster. All nodes rebooted!
Syslogs for nodes 22 to 27:
Update OK for nodes 22, 23, 24 and 27
upgrade of node 25...
Hi,
My cluster was also affected by this issue.
Kernel 5.11.22-9 is installed.
I'll keep you posted too.
May I say that packages are moving quite fast from the testing/no-subscription to the enterprise repository? :p