One thing that annoys me is that if you are logged in to host A and shut it down, you then have to open a new session manually on the other host (my use case is a 2-host cluster with a qdevice). I'm wondering if it's crazy to use keepalived to implement a virtual IP for the cluster? I know it won't...
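In case it helps to picture it, here is a minimal sketch of the keepalived side, assuming the management network sits on vmbr0 and that the address and priorities are placeholders (this only floats a login address between the two hosts, it does nothing for corosync/quorum):

    # /etc/keepalived/keepalived.conf on host A; host B would use
    # state BACKUP and a lower priority. Interface, VRID and address
    # are placeholders for illustration.
    vrrp_instance PVE_MGMT {
        state MASTER
        interface vmbr0          # bridge carrying the management network
        virtual_router_id 51
        priority 150             # e.g. 100 on host B
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24        # the address I'd point my browser at
        }
    }

Since pveproxy listens on port 8006 on every node, browsing to the virtual IP would land on whichever host currently holds it.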
I'm confused by the above instructions. If the backup and backup-proxy services have things locked, how will restarting them help? You'll still be locked, no? I'm backing up to a hotplug SSD, and if I do the restart command above, I'm still unable to export the pool (granted, export is...
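For reference, the sequence I'd expect to need is roughly this, i.e. stopping rather than restarting the services so nothing keeps the datastore open ('usb1tb' is a placeholder pool name):

    # Stop both PBS daemons so no worker holds files on the datastore,
    # then export the hotplug pool; reverse the steps after swapping disks.
    systemctl stop proxmox-backup-proxy proxmox-backup
    zpool export usb1tb
    # ...later, when the disk comes back:
    zpool import usb1tb
    systemctl start proxmox-backup proxmox-backup-proxy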
I have two datastores for PBS: a 4x2 raid10 (ZFS) and a ZFS mirror. GC for the latter works just fine. For the former, every time it runs, in phase 1 (marking), I see:
found (and marked) 602 index files outside of expected directory scheme. (602 is the total number of chunks)...
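For what it's worth, I'm triggering and watching GC from the CLI like this ('raid10store' stands in for the real datastore name):

    # Start garbage collection by hand and poll its progress/status.
    proxmox-backup-manager garbage-collection start raid10store
    proxmox-backup-manager garbage-collection status raid10store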
Yes, /etc/hostname and /etc/hosts show pve.MYDOMAIN.com. And cluster status shows pve1, pve2 and pve3. It seems as if the ceph initialization code picks localhost for some reason...
So I have three nodes, pve1, pve2 and pve3. Because I started off with pve1, the mon is called 'mon.localhost' and the manager is also 'localhost'. I'm assuming this is all OK, but it looks weird to see mons localhost, pve2 and pve3, as well as managers with the same naming convention. I also...
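If it turns out not to be purely cosmetic, my understanding is that the mon/mgr on pve1 could be recreated under the proper name, roughly as below. This is only a sketch and assumes the mons on pve2 and pve3 keep quorum while the 'localhost' one is gone:

    # On pve1: drop the misnamed mon/mgr and recreate them; pveceph names
    # the new instances after the node's hostname.
    pveceph mon destroy localhost
    pveceph mon create
    pveceph mgr destroy localhost
    pveceph mgr create
    ceph -s    # check that mon.pve1 and mgr pve1 now appear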
But why then is that assertion failure happening? In the event, I had been thinking about replacing that processor anyway, since that host gets too busy, having only 6 cores/threads that are significantly slower than the other two. I will check the microcode you referenced.
The 3 hosts:
32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 Sockets)
6 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (1 Socket)
16 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (1 Socket)
Aha, I think I see what happened:
Dec 05 09:40:40 pve1 QEMU[1625064]: kvm: warning: TSC frequency mismatch...
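I'm not certain that warning is actually what kept the guest from coming up, but since the slow host exposes a different feature set, one thing I may try is pinning these guests to a common baseline CPU model instead of 'host' (100 is a placeholder VM ID):

    # Use a shared baseline model so all three hosts present the same
    # virtual CPU to the guest during live migration.
    qm set 100 --cpu kvm64
    qm config 100 | grep ^cpu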
Running a new install of 7.2, upgraded to 7.3. Guests live on a Ceph/RBD datastore. I've noticed a couple of times that when doing a live migration, the guest seems to be migrated successfully but then fails to come up on the target host. Log snippet:
2022-12-05 09:40:36 migration...
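When it happens again I'll re-run the migration from the CLI so the full task log is easier to capture (100 and pve2 are placeholders for the VM ID and target node):

    # Live-migrate the running guest, then check its state on the target node.
    qm migrate 100 pve2 --online
    qm status 100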
Migrating off vSphere. My backup strategy had been: daily, weekly and monthly backups to a ZFS raid10 using Veeam. On the 1st of each month, I'd hotplug a 1TB SSD and run a manual Veeam job, scrub the hotplug pool, export it, and shelve it safely. Is there any way to do that with PBS? I've got a datastore...
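What I'm imagining is roughly the monthly routine below. It's only a sketch: 'main' and 'offsite1tb' are placeholder datastore names, it assumes the hotplug datastore and a remote entry pointing back at this same PBS host ('pbs-self') were created once up front, and the PBS services may need to be stopped before the export (see my other thread about that):

    # 1st of the month: bring the hotplug pool online, pull the main
    # datastore into it, verify, then detach and shelve the disk.
    zpool import offsite1tb
    proxmox-backup-manager pull pbs-self main offsite1tb
    proxmox-backup-manager verify offsite1tb
    zpool export offsite1tb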
Well, I dunno about you, but seeing the above, the first words that pop into my head are NOT "gosh, that is intuitively obvious!" How hard would it be to print "EFI disk cannot be moved while the VM is running"?