Recent content by Jera92

  1. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Alright, I get it now. I migrated all LXCs successfully to RBD storage and will always use RBD-backed storage for VMs & LXCs from now on. But I now have an issue concerning disk migration, with the Move disk to storage option on a VM under VMID > Hardware > Hard Disk and then selecting Disk...
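
    (A hedged aside: the Move disk action in that dialog has a CLI counterpart as well. The VMID 100, disk scsi0, CT 101 and the storage names below are invented examples, not values from this thread.)

      # move a VM disk to an RBD-backed storage and drop the old copy afterwards
      qm disk move 100 scsi0 rbd_vm --delete
      # rough container equivalent for an LXC root filesystem
      pct move-volume 101 rootfs rbd_lxc --delete
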
  2. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thanks, I got it now, I'm sorry for this mistake, I should have known it. Well, I made the RBD storage on the sandbox cluster before creating it on the production cluster, restored a backup of an LXC onto the RBD storage, and it booted and worked. Did a live migration and this also works on the latest PVE...
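
    (A rough sketch of the restore-and-migrate test described here; the backup file name, VMID and target node are illustrative, not taken from the post.)

      # restore an LXC backup onto the RBD-backed storage
      pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2024_06_23-12_00_00.tar.zst --storage rbd_lxc
      # for containers a "live" migration is effectively a restart migration
      pct migrate 101 pve2-sandbox --restart
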
  3. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Well, I tried it with the command, and the rbd_lxc storage then shows up in the GUI. pve1-sandbox# pvesm add rbd rbd_lxc -pool pve_rbd_pool -data-pool pve_rbd_data_pool When I create the RBD storage by navigating to Datacenter > Storage > Add > RBD, I need to select a Pool but can't create one over...
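
    (The Add > RBD dialog only lets you pick pools that already exist, so they have to be created beforehand. One possible way, reusing the pool names from the command above and assuming the data pool is erasure-coded:)

      # replicated metadata pool via the PVE tooling
      pveceph pool create pve_rbd_pool
      # erasure-coded data pool; RBD needs overwrites enabled on it
      ceph osd pool create pve_rbd_data_pool erasure
      ceph osd pool set pve_rbd_data_pool allow_ec_overwrites true
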
  4. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    I disabled the krbd flag. pve1-sandbox# pveceph status cluster: id: 966f472b-71aa-455b-ab51-4bf1617bf92e health: HEALTH_OK services: mon: 3 daemons, quorum pve1-sandbox,pve2-sandbox,pve3-sandbox (age 2w) mgr: pve3-sandbox(active, since 2w), standbys: pve2-sandbox...
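
    (For reference, the same flag can be toggled from the CLI, using the storage name from this thread; as far as I know, containers use the kernel RBD client regardless of this setting.)

      # disable the krbd option on the RBD storage definition
      pvesm set rbd_lxc --krbd 0
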
  5. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thanks, yes, that's true ;) Well, I created the storage with the following commands: pve1-sandbox# cp /etc/ceph/ceph.conf /etc/pve/priv/ceph/pve_rbd.conf pve1-sandbox# cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/pve_rbd.keyring pve1-sandbox# pvesm add rbd rbd_lxc -pool...
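
    (A hedged note: for RBD storages set up this way, the keyring is normally expected under /etc/pve/priv/ceph/<storage-id>.keyring, so for a storage called rbd_lxc the copy and the full pvesm line, as quoted in item 3 above, would look roughly like this.)

      # keyring named after the storage ID
      cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd_lxc.keyring
      # full form of the truncated command, matching item 3
      pvesm add rbd rbd_lxc -pool pve_rbd_pool -data-pool pve_rbd_data_pool
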
  6. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thank you for your fast reply! Just to be 100% sure, should I create the RBD storage with the option "Use Proxmox VE managed hyper-converged ceph pool" on or off? What's the difference between the two? I attached the options shown in the screenshots.
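
    (A rough illustration of the difference, with invented names and monitor addresses: with the hyper-converged option the storage uses the monitors and keyring of the locally managed Ceph cluster, while without it the monitors have to be given explicitly and a keyring copied in.)

      # PVE-managed, hyper-converged pool on the cluster itself
      pvesm add rbd rbd_vm --pool pve_rbd_pool --content images,rootdir
      # external Ceph cluster: monitor hosts plus a keyring under /etc/pve/priv/ceph/ are required
      pvesm add rbd rbd_ext --pool ext_pool --monhost "10.10.10.1 10.10.10.2 10.10.10.3" --content images
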
  7. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Okay, I can try this in our sandbox environment. Can I create RBD storage on top of an existing Ceph cluster on the PVE nodes themselves? I'm inexperienced with RBD... I found this command to create it on a PVE node: pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool> And via...
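
    (Before pointing that command at the existing hyper-converged cluster, the pools it should use can be listed from any node; a small sketch.)

      # list the pools of the existing Ceph cluster
      pveceph pool ls
      ceph osd lspools
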
  8. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    pve1# pveversion -v proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve) pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325) proxmox-kernel-helper: 8.1.0 pve-kernel-5.15: 7.4-14 proxmox-kernel-6.8: 6.8.8-2 proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2 pve-kernel-5.15.158-1-pve: 5.15.158-1...
  9. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    I did a test on one production server with the correct setup for Ceph (no hardware RAID). Upgraded from PVE 7.4-18 to PVE 8.2.4. We also have Debian 12 LXCs, and with HA migration to the node with the latest version of PVE, they don't want to start. I tested with a Debian LXC with ID 102: task...
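
    (A hedged sketch of getting more detail than the task log when a container refuses to start; VMID 102 as in this post, and the foreground lxc-start variant is the one used further down in this listing.)

      # verbose start through the PVE tooling
      pct start 102 --debug
      # or run the container in the foreground with LXC debug logging
      lxc-start -n 102 -F -l DEBUG -o /tmp/lxc-102.log
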
  10. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Thank you for your reply. I know Ceph and RAID is not a good combination, but on these older servers I was unable to remove the RAID card and add a passthrough card. I didn't configure any RAID setup on the hardware RAID controller, but I placed every disk in a bypass mode supported by the RAID card. For...
  11. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    Yes, LXC 101 was now running successfully, and I didn't want to break it again. Because I was experiencing the same issue with LXC 102, I tried the same steps. But it seems this one has other problems? What I found out now is that on pve2-sandbox, disk 4 has a SMART failure, so I guess that disk is...
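
    (To confirm the failing disk and see which OSD sits on it, something along these lines could be used; the device name is only an example.)

      # SMART details of the suspect disk on pve2-sandbox
      smartctl -a /dev/sdd
      # map OSDs to hosts and check the recorded device health
      ceph osd tree
      ceph device ls
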
  12. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    The Ceph system mentioned some errors before the upgrade due to the availability of host pve1-sandbox:
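
    (The exact warnings behind that can usually be pulled up with the following.)

      # detailed health output naming the affected host/OSDs
      ceph health detail
      # overall cluster state
      ceph -s
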
  13. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    lxc-start 102 20240623124823.632 DEBUG conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/zero) to 18(zero) lxc-start 102 20240623124823.632 INFO conf - ../src/lxc/conf.c:lxc_fill_autodev:1209 - Populated "/dev" lxc-start 102 20240623124823.632 INFO...
  14. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    pve2-sandbox# cat lxc-102.log lxc-start 102 20240623124819.130 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536 lxc-start 102 20240623124819.130 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map...
  15. Proxmox Virtual Environment 8.2.2 - LXC High Availability after upgrade PVE v7 to v8

    The result: pve2-sandbox# lxc-start -n 102 -F -lDEBUG lxc-start: 102: ../src/lxc/sync.c: sync_wait: 34 An error occurred in another process (expected sequence number 7) lxc-start: 102: ../src/lxc/start.c: __lxc_start: 2114 Failed to spawn container "102" lxc-start: 102...