Recent content by dswartz

  1. [SOLVED] Remove (removable) datastore

    I'm confused by the above instructions. If the backup and backup-proxy services have things locked, how will restarting them help? You'll still be locked, no? I'm backing up to a hotplug SSD, and if I do the restart command above, I'm still unable to export the pool (granted, export is... See the stop-and-export sketch after this list.
  2. Bug involving namespaces and .chunks directory?

    I have two datastores for PBS: a 4x2 raid10 (ZFS) and a ZFS mirror. GC for the latter works just fine. For the former, every time it runs, in phase 1 (marking), I see: found (and marked) 602 index files outside of expected directory scheme. (602 is the total number of chunks)...
  3. Curious about 'localhost' for a ceph cluster

    Worked perfectly. stop mon.localhost, destroy mon.localhost, create mon (specify pve1), start mon.pve1. Repeat above for mgr.localhost. Thanks! (The full command sequence is sketched after this list.)
  4. Problem with live migration?

    Worked perfectly. Did stop, then destroy, then create for mgr and mon. Thanks!
  5. Curious about 'localhost' for a ceph cluster

    I haven't had a chance to try yet - busy at my day job :)
  6. Curious about 'localhost' for a ceph cluster

    Yes, /etc/hostname and /etc/hosts show pve.MYDOMAIN.com. And cluster status shows pve1, pve2 and pve3. It seems as if the ceph initialization code picks localhost for some reason...
  7. Curious about 'localhost' for a ceph cluster

    So I have three nodes, pve1, pve2 and pve3. Because I started off with pve1, the mon is called 'mon.localhost' and the manager is also 'localhost'. I'm assuming this is all OK, but it looks weird to see mons localhost, pve2 and pve3, as well as managers with the same naming convention. I also...
  8. Problem with live migration?

    Microcode installed. I will reboot the 3 hosts later... Thanks for the help! (The install step is sketched after this list.)
  9. Problem with live migration?

    But why then is that assertion failure happening? In the event, I had been thinking about replacing that processor anyway, since that host gets too busy, having only 6 cores/threads that are significantly slower than the others. I will check the microcode you referenced.
  10. Problem with live migration?

    I've ordered a replacement CPU identical to the other 3...
  11. Problem with live migration?

    I think I have an idea why this just started happening - I changed the CPU type of running guests from the default kvm64 to host. (See the qm sketch after this list.)
  12. Problem with live migration?

    It's curious that it doesn't always happen, though. All 3 nodes are up to date, according to 'apt update' etc...
  13. Problem with live migration?

    The 3 hosts:
    32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 Sockets)
    6 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (1 Socket)
    16 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (1 Socket)
    Aha, I think I see what happened: Dec 05 09:40:40 pve1 QEMU[1625064]: kvm: warning: TSC frequency mismatch...
  14. Problem with live migration?

    Running a new install of 7.2, upgraded to 7.3. Guests live on a ceph/rbd datastore. I've noticed a couple of times when doing a live migration that the guest seems to be migrated successfully, but then fails to be up and running on the target host. Log snippet: 2022-12-05 09:40:36 migration...
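
On the removable datastore from item 1: the usual resolution discussed for this is to stop (rather than restart) the PBS services, since a restarted service simply reopens the datastore. A minimal sketch of the sequence, assuming a datastore and ZFS pool both named 'hotplug' (hypothetical names):

    # Remove the datastore from the PBS config first (the data stays on disk)
    proxmox-backup-manager datastore remove hotplug
    # Stop the services holding open handles on the mount
    systemctl stop proxmox-backup-proxy proxmox-backup
    # With nothing holding the pool open, the export should succeed
    zpool export hotplug
    # Bring the services back up afterwards
    systemctl start proxmox-backup proxmox-backup-proxy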
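
The 'localhost' mon/mgr fix from items 3-7, written out as commands. This is a sketch assuming it is run on pve1 and that the misnamed daemons really do show up as 'localhost' in 'ceph status'; pveceph names the recreated daemons after the node itself:

    # Monitor: stop, destroy, recreate (the new mon takes the node name, pve1)
    systemctl stop ceph-mon@localhost
    pveceph mon destroy localhost
    pveceph mon create
    # Repeat for the manager
    systemctl stop ceph-mgr@localhost
    pveceph mgr destroy localhost
    pveceph mgr create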
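
The microcode install from items 8-9 is the standard Debian route; this assumes the non-free component is already enabled in the APT sources:

    apt update
    apt install intel-microcode   # applied on the next reboot of each host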
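
The CPU-type change from item 11 can be made, or reverted, per guest with qm; VMID 100 is hypothetical:

    # 'host' passes the physical CPU model through: fastest, but live migration
    # then expects matching CPUs (and microcode) on every node
    qm set 100 --cpu host
    # 'kvm64' is the old default: a lowest-common-denominator model that
    # migrates safely between the mismatched Xeons listed in item 13
    qm set 100 --cpu kvm64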
