Live migration requires the same CPU type on both hosts, or the default kvm64, which is slower due to missing CPU flags...
And if PVE reboots unplanned...
And btw, the OP means a VM reboot, not a PVE reboot. Just think about a kernel update & reboot...
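For reference, a quick way to check or pin the CPU type mentioned above (VMID 100 is just a placeholder):

# show the configured CPU type of the VM (no output = default kvm64)
qm config 100 | grep '^cpu:'
# pin an explicit CPU type, e.g. the lowest common denominator kvm64
qm set 100 --cpu kvm64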
Two ways (assuming no workload - i.e. the Ceph pool is empty); a rough ceph.conf sketch for way 1] follows below:
1] the monitors' IPs stay - manually change the Ceph config file to set the cluster network, then reboot
2] the monitors' IPs change - remove all monitors, change the Ceph config file, recreate the monitors
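For way 1], the change in /etc/pve/ceph.conf would look roughly like this (the subnets are only examples, adjust to your setup):

[global]
     # public network (monitors / client access) stays as it is
     public network = 192.168.1.0/24
     # dedicated cluster network for OSD replication/heartbeat
     cluster network = 10.10.10.0/24

then restart the OSDs (or reboot the node): systemctl restart ceph-osd.target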
The public (aka internet) network will never work with 9k MTU...
2] What will you do when you connect another PVE host? You will need to rework the corosync network. Think twice about it. 1G switches are cheap.
3] What if your connection/switch fails during a backup/VM migration/etc.? Use LACP and don't even think about a single connection.
I have a little problem deciphering "connection for clients to the backend". What is the client, what is the backend?
If you have the PVE frontend on the same subnet as all users in the company - from my point of view it's a security and performance NO. Any broadcast can overwhelm this subnet. Create a management...
If you have slots for 1GbE NICs, use them for corosync.
Now - split Ceph into 2 networks:
C1] Ceph cluster (OSDs etc.) - 100GbE
C2] Ceph public (monitors = client access) - 10GbE minimum
Now the Proxmox side:
P1] PVE cluster (corosync) - 1GbE primary, 1GbE secondary (or use the Ceph backend or the PVE frontend as the secondary) - see the corosync sketch after this list
P2]...
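For P1], a redundant corosync setup with two links looks roughly like this in /etc/pve/corosync.conf (node names and addresses are only examples):

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.1.1.1   # primary dedicated 1GbE corosync network
    ring1_addr: 10.2.2.1   # secondary link (second 1GbE, or the ceph/frontend net)
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.1.1.2
    ring1_addr: 10.2.2.2
  }
}

If I remember right, on PVE 6+ you can also pass --link0/--link1 to pvecm create / pvecm add instead of editing the file by hand.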
After the update from PVE 6.2 to pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve) I had the same problem - the 2nd link was down on the PVE host. The links are managed by Open vSwitch in LACP 802.3ad mode. The switch side showed both links up and LACPed.
Feb 15 10:38:00 pve-backup-1 kernel: [...
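For comparison on the host side, the OVS view of the bond can be checked like this (bridge and bond names are just examples from my setup):

# list ports on the OVS bridge
ovs-vsctl list-ports vmbr0
# show bond member state and LACP status as OVS sees it
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0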
Hi
Based on the timers:
Thu 2020-08-27 04:21:58 CEST 17h left Wed 2020-08-26 03:57:57 CEST 6h ago pmg-daily.timer pmg-daily.service
I would expect sa-update to have run automatically. But:
root@pmg-01:/var/log/pve/tasks/C# cat...
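What I would check next (just my guess at the right place to look) is the unit the timer triggers:

systemctl status pmg-daily.timer pmg-daily.service
journalctl -u pmg-daily.service --since "2020-08-26"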
I don't use Proxmox with EFI, let alone mixed with mdraid. Try it another way - boot from a repair CD, mount your "new raided system", regenerate grub... Can't help more.
Because you have some spare space on the pve VG...
1] prepare ssd2 as a RAID1 disk (HW RAID, mdraid, ZFS, etc.)
2] create a VG with a name other than "pve" on that RAID1 (rename it later if you want "pve" as the VG name)
3] copy the logical volumes from ssd1 to ssd2 - for example, create a new LV on the ssd2 VG and use dd for the copy (see the sketch below)...
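A rough sketch of step 3] (the VG/LV names and size are placeholders - take the real values from lvs/vgs):

# inspect the existing LVs and their sizes
lvs
# create a matching LV on the new RAID1 VG (here called vg_ssd2)
lvcreate -L 50G -n root vg_ssd2
# raw copy of the old LV onto the new one
dd if=/dev/pve/root of=/dev/vg_ssd2/root bs=4M status=progress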
1] back up a VM to PBS as some_user to repository A (ACL assigned)
2] add other_user to repository A (ACL assigned)
3] change the backup user in PVE
4] backup error: other_user@pbs != some_user@pbs (see the ownership note below)
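That is the backup-group ownership check in PBS - the group vm/<id> is still owned by some_user@pbs, so other_user@pbs may not write into it. If your PBS version already ships the change-owner subcommand, something like this should hand the group over (the group and repository below are placeholders); otherwise it can be done from the datastore Content tab in the PBS GUI:

proxmox-backup-client change-owner vm/100 other_user@pbs --repository other_user@pbs@pbs-host:repoA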