Search results

  1. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Alright, I get it now. I migrated all LXCs successfully to RBD storage and will always use RBD-backed storage for VMs & LXCs from now on. But I now have an issue concerning disk migration, with the Move disk to storage option on a VM under VMID > Hardware > Hard Disk and then selecting Disk...
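
    For reference, the same move can be scripted from the CLI. A minimal sketch, assuming the rbd_lxc storage named later in this thread (the VMID and disk slot below are placeholders):

    # move a VM disk onto the RBD storage and drop the old copy
    pve1# qm disk move 100 scsi0 rbd_lxc --delete 1
    # the container equivalent for a rootfs volume
    pve1# pct move-volume 101 rootfs rbd_lxc --delete 1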
  2. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thanks, I got it now. I'm sorry for this mistake, I should have known it. Well, I made the RBD storage on the sandbox cluster before creating it on the production cluster, restored a backup of an LXC onto the RBD storage, and it booted and worked. Did a live migration and this also works on the latest PVE...
  3. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Well, I tried it with the command, and the rbd_lxc storage now shows up in the GUI. pve1-sandbox# pvesm add rbd rbd_lxc -pool pve_rbd_pool -data-pool pve_rbd_data_pool When I create the RBD storage by navigating to Datacenter > Storage > Add > RBD, I need to select a Pool but can't create one over...
  4. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    I disabled the krbd flag. pve1-sandbox# pveceph status cluster: id: 966f472b-71aa-455b-ab51-4bf1617bf92e health: HEALTH_OK services: mon: 3 daemons, quorum pve1-sandbox,pve2-sandbox,pve3-sandbox (age 2w) mgr: pve3-sandbox(active, since 2w), standbys: pve2-sandbox...
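
    For reference, the krbd flag can also be toggled per storage from the CLI; a minimal sketch, assuming the rbd_lxc storage from this thread:

    # disable the kernel RBD client for this storage (VM disks then use librbd;
    # containers always map their volumes through krbd regardless of this flag)
    pve1-sandbox# pvesm set rbd_lxc --krbd 0
    pve1-sandbox# pveceph status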
  5. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thanks, yes, that's true ;) Well, I created the storage with the following commands: pve1-sandbox# cp /etc/ceph/ceph.conf /etc/pve/priv/ceph/pve_rbd.conf pve1-sandbox# cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/pve_rbd.keyring pve1-sandbox# pvesm add rbd rbd_lxc -pool...
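
    Note that copying ceph.conf and the keyring into /etc/pve/priv/ceph/ is only needed when the Ceph cluster is external to PVE. On a hyper-converged cluster managed with pveceph, a minimal sketch, assuming the pool names above already exist:

    # register the existing pools as an RBD storage usable for VM and CT disks
    pve1-sandbox# pvesm add rbd rbd_lxc --pool pve_rbd_pool --data-pool pve_rbd_data_pool --content images,rootdir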
  6. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thank you for your fast reply! Just to be 100% sure, should I create the RBD storage with the option "Use Proxmox VE managed hyper-converged ceph pool" on or off? What's the difference between these? I attached the options shown in the screenshots.
  7. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Okay, I can try this in our sandbox environment. Can I create RBD storage on top of an existing Ceph cluster on the PVE nodes themselves? I'm inexperienced with RBD... I found this command to create it on a PVE node: pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool> And via...
  8. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    pve1# pveversion -v proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve) pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325) proxmox-kernel-helper: 8.1.0 pve-kernel-5.15: 7.4-14 proxmox-kernel-6.8: 6.8.8-2 proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2 pve-kernel-5.15.158-1-pve: 5.15.158-1...
  9. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    I did a test on one production server with the correct setup for Ceph (no hardware RAID). Upgraded from PVE 7.4-18 to PVE 8.2.4. We also have Debian 12 LXCs, and after an HA migration to the node with the latest version of PVE, the container doesn't want to start. I tested with a Debian LXC with ID 102: task...
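
    To see why such a container refuses to start, it can be run in the foreground with debug logging, as is done further down in this thread; a sketch for container 102:

    # write a full debug trace to a file for inspection
    pve2-sandbox# lxc-start -n 102 -F -l DEBUG -o /tmp/lxc-102.log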
  10. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thank you for the reply. I know Ceph and RAID are not a good combination, but on these older servers I was unable to remove the RAID card and add a passthrough card. I didn't configure any RAID setup on the hardware RAID controller, but I placed every disk in a bypass mode supported by the RAID card. For...
  11. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Yes, LXC 101 was now running successfully, and I didn't want to break it again. Because I was experiencing the same issue with LXC 102, I tried the same steps. But it seems this one has other problems? What I have now found out is that on pve2-sandbox, disk 4 has a SMART failure, so I guess that disk is...
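
    A sketch for confirming the failure and taking the backing OSD out of service; the device name and OSD id below are placeholders:

    # read the full SMART report of the suspect disk
    pve2-sandbox# smartctl -a /dev/sdd
    # map OSDs to physical devices, then mark the failing OSD out so Ceph rebalances
    pve2-sandbox# ceph-volume lvm list
    pve2-sandbox# ceph osd out 3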
  12. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    The Ceph system mentioned some errors before the upgrade due to the availability of host pve1-sandbox:
  13. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    lxc-start 102 20240623124823.632 DEBUG conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/zero) to 18(zero) lxc-start 102 20240623124823.632 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1209 - Populated "/dev" lxc-start 102 20240623124823.632 INFO...
  14. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    pve2-sandbox# cat lxc-102.log lxc-start 102 20240623124819.130 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536 lxc-start 102 20240623124819.130 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map...
  15. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    The result: pve2-sandbox# lxc-start -n 102 -F -lDEBUG lxc-start: 102: ../src/lxc/sync.c: sync_wait: 34 An error occurred in another process (expected sequence number 7) lxc-start: 102: ../src/lxc/start.c: __lxc_start: 2114 Failed to spawn container "102" lxc-start: 102...
  16. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thanks for your reply! I tried the following: pve3-sandbox# pct fsck 101 fsck from util-linux 2.38.1 /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw: The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem. If the device is valid and it really...
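
    The image can also be inspected directly on the CephFS mount, read-only, to see whether a filesystem is visible at all; a sketch using the path from the fsck output above:

    # identify what is actually inside the image, then do a non-destructive ext4 check
    pve3-sandbox# file /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw
    pve3-sandbox# e2fsck -f -n /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw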
  17. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    I'm using the following version on all LXCs: # cat /etc/debian_version 12.4 All hosts in the cluster and LXCs are running the same version. The LXCs only want to boot on pve3-sandbox after a restore from backup. Even after a restore from a backup they don't boot on pve1-sandbox &...
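
    For reference, a restore can be pointed at a specific target storage from the CLI; a sketch with a placeholder archive path:

    # restore container 101 from a vzdump archive onto the RBD storage, overwriting the broken CT
    pve3-sandbox# pct restore 101 /var/lib/vz/dump/vzdump-lxc-101.tar.zst --storage rbd_lxc --force 1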
  18. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Yes, I'm sure, I updated the 3 nodes on the same day from the same repos (confirmed and posted only once below): pve:~# cat /etc/apt/sources.list deb http://ftp.be.debian.org/debian bookworm main contrib deb http://ftp.be.debian.org/debian bookworm-updates main contrib # PVE...
  19. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Hi, thank you for your reply. Without doing anything on the LXCs themselves, and even when they are not in HA mode, the LXCs crash after 2 days, also on the same host where they ran well before the migration to PVE 8. I don't know if this information is relevant, but I wanted to mention it. Below you find...
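
    The crash context can usually be recovered from the host journal; a sketch for container 102 (the node prompt is a placeholder):

    # PVE containers run under a systemd template unit, so the journal keeps their start/stop history
    pve1# journalctl -u pve-container@102 --since "2 days ago"
    # check whether the kernel OOM killer or another host-side event hit the container
    pve1# dmesg -T | grep -i -e oom -e lxc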
  20. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Dear members of the Proxmox forum, I have a question about HA issues with LXCs occurring after the upgrade of Proxmox VE from version 7.4-18 to 8.2.2. We have multiple Debian 12 LXCs running on our PVE clusters: one PVE cluster as a development environment and one as a production environment. I...
