Search results

  1. Virtual IP for GUI?

    One thing that annoys me is that if you are logged in to host A and shut it down, you then have to open a new session manually on the other host (my use case is a 2-host cluster with a qdevice). I'm wondering if it's crazy to use keepalived to implement a virtual IP for the cluster (a minimal keepalived sketch follows this list)? I know it won't...
  2. [SOLVED] Remove (removable) datastore

    I'm confused by the above instructions. If the backup and backup-proxy services have things locked, how will restarting them help? You'll still be locked, no? I'm backing up to a hotplug SSD, and if I do the restart command above, I'm still unable to export the pool (granted, export is...
  3. Bug involving namespaces and .chunks directory?

    I have two datastores for PBS: a 4x2 raid10 (ZFS) and a ZFS mirror. GC for the latter works just fine. For the former, every time it runs, in phase 1 (marking), I see: found (and marked) 602 index files outside of expected directory scheme. (602 is the total number of chunks)...
  4. Curious about 'localhost' for a ceph cluster

    Worked perfectly. Stop mon.localhost, destroy mon.localhost, create the mon (specifying pve1), start mon.pve1. Repeat the above for mgr.localhost (a command-line sketch of this procedure follows this list). Thanks!
  5. Problem with live migration?

    Worked perfectly. Did stop, then destroy, then create for the mgr and mon. Thanks!
  6. Curious about 'localhost' for a ceph cluster

    I haven't had a chance to try yet - busy at my day job :)
  7. Curious about 'localhost' for a ceph cluster

    Yes, /etc/hostname and /etc/hosts show pve.MYDOMAIN.com. And cluster status shows pve1, pve2 and pve3. It seems as if the ceph initialization code picks localhost for some reason...
  8. Curious about 'localhost' for a ceph cluster

    So I have three nodes, pve1, pve2 and pve3. Because I started off with pve1, the mon is called 'mon.localhost' and the manager is also 'localhost'. I'm assuming this is all OK, but it looks weird to see mons localhost, pve2 and pve3, as well as managers with the same naming convention. I also...
  9. Problem with live migration?

    Microcode installed. I will reboot the 3 hosts later... Thanks for the help!
  10. Problem with live migration?

    But why then is that assertion failure happening? In the event, I had been thinking about replacing that processor anyway, since that host gets too busy due to having 6 cores/threads that are significantly slower. I will check the microcode you referenced.
  11. Problem with live migration?

    I've ordered a replacement CPU identical to the other 3...
  12. Problem with live migration?

    I think I have an idea why this just started happening: I changed the CPU type of the running guests from the default kvm64 to host (a sketch for switching back to a common model follows this list).
  13. Problem with live migration?

    It's curious that it doesn't always happen, though. All 3 nodes are up to date according to 'apt update' etc...
  14. Problem with live migration?

    The 3 hosts:
      32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 Sockets)
      6 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (1 Socket)
      16 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (1 Socket)
    Aha, I think I see what happened: Dec 05 09:40:40 pve1 QEMU[1625064]: kvm: warning: TSC frequency mismatch...
  15. Problem with live migration?

    Running a new install of 7.2, upgraded to 7.3. Guests live on a ceph/rbd datastore. I've noticed a couple of times when doing a live migration that the guest seems to be migrated successfully, but then fails to be up and running on the target host. Log snippet: 2022-12-05 09:40:36 migration...
  16. Is it possible to have a hotplug datastore?

    That's what I was afraid of, thanks. My solution (for now): create a directory datastore, back up to that, then rsync to the hotplug zpool (a sketch of this workflow follows this list).
  17. Is it possible to have a hotplug datastore?

    The hotplug disk for that datastore is a ZFS pool. If there is a datastore on it, I cannot 'zpool export hotplug' (hotplug is the zpool name); a sketch for checking what keeps the pool busy follows this list.
  18. Is it possible to have a hotplug datastore?

    Migrating off vSphere. My backup strategy had been: daily, weekly and monthly to a ZFS raid10 using Veeam. On the 1st of each month, I'd hotplug a 1TB SSD, run a manual Veeam job, scrub the hotplug pool, export it, and shelve it safely. Is there any way to do that with PBS? I've got a datastore...
  19. TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled

    Well, I dunno about you, but seeing the above, the first words that pop into my head are NOT "gosh, that is intuitively obvious!" How hard is it to print "EFI disk cannot be moved while VM is running!"
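
Entry 1 asks about a keepalived-managed virtual IP for the GUI. A minimal sketch follows; the interface name (vmbr0), virtual_router_id (51) and VIP (192.168.1.50/24) are illustrative assumptions, not values from the thread, and the second host would run the same config with state BACKUP and a lower priority.

    # Hedged sketch: install keepalived and float one address between the two hosts.
    apt install keepalived

    # /etc/keepalived/keepalived.conf on the first host
    # (on the second host: state BACKUP, e.g. priority 100, same virtual_router_id)
    vrrp_instance PVE_GUI {
        state MASTER
        interface vmbr0              # bridge carrying the management network (assumption)
        virtual_router_id 51         # any id not used by other VRRP speakers on this LAN
        priority 150
        advert_int 1
        virtual_ipaddress {
            192.168.1.50/24          # the floating GUI address (placeholder)
        }
    }

    # Then enable it and browse to https://192.168.1.50:8006 - whichever node
    # currently holds the VIP answers:
    systemctl enable --now keepalived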
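
Entry 4 describes renaming the 'localhost' mon and mgr by stopping, destroying and recreating them. A command-line sketch of that sequence, assuming Proxmox VE's pveceph tooling on pve1 and that the cluster keeps quorum through the other monitors while this one is recreated:

    systemctl stop ceph-mon@localhost      # stop the misnamed monitor
    pveceph mon destroy localhost          # remove it from the monmap
    pveceph mon create                     # recreate it; the new mon takes the node name (pve1)
    systemctl status ceph-mon@pve1         # confirm the new monitor is up

    systemctl stop ceph-mgr@localhost      # same sequence for the manager
    pveceph mgr destroy localhost
    pveceph mgr create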
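
Entry 12 ties the migration failures to switching guests from kvm64 to the host CPU type. A sketch of reverting to a model that is identical on every node; the VMID (100) is a placeholder, and the change only takes effect after the guest is stopped and started again:

    qm set 100 --cpu kvm64        # back to the default lowest-common-denominator model
    # or, since all three hosts are Broadwell-era Xeons, a richer shared baseline:
    qm set 100 --cpu Broadwell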
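
Entry 16's interim workflow (back up to a directory datastore, then mirror it to the hotplug pool before shelving the disk) might look like the sketch below. The datastore path /mnt/datastore/backup1 is an assumption, the pool name hotplug comes from the thread, and zpool wait needs OpenZFS 2.0 or later:

    zpool import hotplug                                          # after plugging the SSD back in
    rsync -a --delete /mnt/datastore/backup1/ /hotplug/pbs-copy/  # mirror the directory datastore
    zpool scrub hotplug                                           # integrity pass before shelving
    zpool wait -t scrub hotplug                                   # wait for the scrub to finish
    zpool export hotplug                                          # the disk can now be pulled and shelved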
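
Entry 17 cannot export the pool while a datastore lives on it. A diagnostic sketch, assuming the standard PBS service names, that stopping them briefly is acceptable, and that the pool is mounted at its default /hotplug:

    systemctl stop proxmox-backup-proxy proxmox-backup    # PBS daemons keep datastore files open
    fuser -vm /hotplug                                     # list anything still pinning the mountpoint
    zpool export hotplug
    systemctl start proxmox-backup proxmox-backup-proxy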
