Recent content by dswartz

  1. Virtual IP for GUI?

    One thing that annoys me is if you are logged in to host A and shut it down, you then have to open a new session manually on the other host (my use case is a 2-host cluster with qdevice). I'm wondering if it's crazy to use keepalived to implement a virtual IP for the cluster? I know it won't...
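    A floating GUI address with keepalived could look roughly like this (a minimal sketch; the bridge name, VIP, and priorities are assumptions, not taken from the post):

    ```
    # /etc/keepalived/keepalived.conf on host A (mirror on host B with a lower priority)
    vrrp_instance PVE_GUI {
        state BACKUP              # run both hosts as BACKUP; priority elects the master
        interface vmbr0           # assumed management bridge
        virtual_router_id 51
        priority 100              # use e.g. 90 on the second host
        advert_int 1
        virtual_ipaddress {
            192.0.2.50/24         # assumed floating address for https://<VIP>:8006
        }
    }
    ```

    Whichever host currently holds the VIP answers on port 8006; as the post notes, the browser session itself still will not survive the failover.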
  2. [SOLVED] Remove (removable) datastore

    I'm confused by the above instructions. If the backup and backup-proxy services have things locked, how will restarting them help? You'll still be locked, no? I'm backing up to a hotplug SSD, and if I do the restart command above, I'm still unable to export the pool (granted, export is...
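    One reading of the workaround being discussed is to stop (not restart) the services so nothing holds the datastore open, then export. A sketch, assuming the pool is named `backup-ssd`:

    ```shell
    # stop the services that may hold the datastore open, then export the pool
    systemctl stop proxmox-backup-proxy.service proxmox-backup.service
    zpool export backup-ssd        # 'backup-ssd' is an assumed pool name

    # ...after re-plugging the SSD later:
    zpool import backup-ssd
    systemctl start proxmox-backup.service proxmox-backup-proxy.service
    ```
    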
  3. Bug involving namespaces and .chunks directory?

    I have two datastores for PBS: a 4x2 raid10 (ZFS) and a ZFS mirror. GC for the latter works just fine. For the former, every time it runs, in phase 1 (marking), I see: found (and marked) 602 index files outside of expected directory scheme. (602 is the total number of chunks)...
  4. Curious about 'localhost' for a ceph cluster

    Worked perfectly. stop mon.localhost, destroy mon.localhost, create mon (specify pve1), start mon.pve1. Repeat above for mgr.localhost. Thanks!
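    The stop/destroy/create sequence described above, expressed with the usual commands (run on pve1; `localhost` is the old mon/mgr ID, and `pveceph ... create` names the new daemon after the node's hostname):

    ```shell
    # replace the misnamed monitor
    systemctl stop ceph-mon@localhost.service
    pveceph mon destroy localhost
    pveceph mon create               # comes up as mon.pve1

    # repeat for the manager
    systemctl stop ceph-mgr@localhost.service
    pveceph mgr destroy localhost
    pveceph mgr create               # comes up as mgr.pve1
    ```
    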
  5. Problem with live migration?

    Worked perfectly. Did stop, then destroy, then create for mgr and mon. Thanks!
  6. Curious about 'localhost' for a ceph cluster

    I haven't had a chance to try yet - busy at my day job :)
  7. Curious about 'localhost' for a ceph cluster

    Yes, /etc/hostname and /etc/hosts show pve.MYDOMAIN.com. And cluster status shows pve1, pve2 and pve3. It seems as if the ceph initialization code picks localhost for some reason...
  8. Curious about 'localhost' for a ceph cluster

    So I have three nodes, pve1, pve2 and pve3. Because I started off with pve1, the mon is called 'mon.localhost' and the manager is also 'localhost'. I'm assuming this is all OK, but it looks weird to see mons localhost, pve2 and pve3, as well as managers with the same naming convention. I also...
  9. Problem with live migration?

    Microcode installed. I will reboot the 3 hosts later... Thanks for the help!
  10. Problem with live migration?

    But why then is that assertion failure happening? In the event, I had been thinking about replacing that processor anyway, since that host gets too busy due to having 6 cores/threads that are significantly slower. I will check the microcode you referenced.
  11. Problem with live migration?

    I've ordered a replacement CPU identical to the other 3...
  12. Problem with live migration?

    I think I have an idea why this just started happening - I changed the CPU type of running guests from the default kvm64 to host.
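    That change can be made per guest with `qm` (the VM ID 100 is illustrative, not from the post):

    ```shell
    # switch an assumed VM 100 from the default model to host passthrough
    qm set 100 --cpu host

    # reverting to the old default:
    qm set 100 --cpu kvm64
    ```

    With `--cpu host` the guest sees the physical CPU's exact feature set, so live migration is only safe between identical hosts.
    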
  13. Problem with live migration?

    It's curious that it doesn't always happen, though. All 3 nodes are up to date according to 'apt update' etc.
  14. Problem with live migration?

    The 3 hosts:
    - 32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 Sockets)
    - 6 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (1 Socket)
    - 16 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (1 Socket)

    Aha, I think I see what happened: Dec 05 09:40:40 pve1 QEMU[1625064]: kvm: warning: TSC frequency mismatch...
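    With mixed CPU models like these, one common mitigation (not stated in the thread) is to pin guests to a shared baseline model instead of `host`, so the guest never sees host-specific features (VM ID 100 is illustrative):

    ```shell
    # pick a model every host in the cluster can provide;
    # x86-64-v2-AES is an assumed choice that Broadwell-era Xeons support
    qm set 100 --cpu x86-64-v2-AES
    ```
    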