Alright, I get it now.
I migrated all LXCs successfully to RBD storage and will always use RBD-backed storage for VMs and LXCs from now on.
But I now have an issue with disk migration, using the Move disk to storage option on a VM under VMID > Hardware > Hard Disk and then selecting Disk...
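In case the CLI equivalent is useful: moving a VM disk to another storage should also work with something like the following (VM 100, disk scsi0 and target storage rbd_vm are just placeholder names):
qm disk move 100 scsi0 rbd_vm --delete 1
The --delete flag removes the source volume after a successful move; without it the old volume stays behind as an unused disk.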
Thanks, I got it now. I'm sorry for this mistake, I should have known it.
Well, I created the RBD storage on the sandbox cluster before creating it on the production cluster, restored a backup of an LXC onto the RBD storage, and it booted and worked. I did a live migration and this also works on the latest PVE...
Well, I tried it with the following command, and the rbd_lxc storage now shows up in the GUI:
pve1-sandbox# pvesm add rbd rbd_lxc -pool pve_rbd_pool -data-pool pve_rbd_data_pool
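To double-check it outside the GUI, the storage status and its content should also be visible from the CLI:
pve1-sandbox# pvesm status --storage rbd_lxc
pve1-sandbox# pvesm list rbd_lxc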
When I create the RBD storage by navigating to Datacenter > Storage > Add > RBD, I need to select a Pool but can't create one over...
Thanks, yes, that's true ;)
Well, I created the storage with the following commands:
pve1-sandbox# cp /etc/ceph/ceph.conf /etc/pve/priv/ceph/pve_rbd.conf
pve1-sandbox# cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/pve_rbd.keyring
pve1-sandbox# pvesm add rbd rbd_lxc -pool...
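As far as I understand, the keyring under /etc/pve/priv/ceph/ has to be named after the storage ID, so for a storage called pve_rbd (just an example ID) the pattern would be:
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/pve_rbd.keyring
pvesm add rbd pve_rbd --pool <pool> --monhost "<mon1> <mon2> <mon3>" --content images,rootdir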
Thank you for your fast reply!
Just to be 100% sure, should I create the RBD storage with the option "Use Proxmox VE managed hyper-converged ceph pool" on or off?
What's the difference between these?
I attached screenshots showing the options.
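If I understand the storage documentation correctly, with the option on, the storage uses the local PVE-managed Ceph cluster and the monitors and authentication are taken over automatically; with it off, it is treated as an external Ceph cluster and you have to enter the monitor addresses and provide a keyring yourself. On the CLI the difference would roughly be (all names and addresses below are placeholders):
pvesm add rbd rbd_local --pool <pool> --content images,rootdir
pvesm add rbd rbd_ext --pool <pool> --monhost "10.0.0.1 10.0.0.2 10.0.0.3" --username admin
(for the external variant, the keyring would go to /etc/pve/priv/ceph/rbd_ext.keyring)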
Okay, I can try this in our sandbox environment.
Can I create RBD storage on top of an existing Ceph cluster on the PVE nodes themselves? I'm inexperienced with RBD...
I found this command to create it on a pve node:
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
And via...
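If the erasure-coded pool doesn't exist yet, it apparently has to be prepared for RBD first. As far as I can tell, the plain Ceph commands for that would be something like (the pool name is just an example):
ceph osd pool create pve_rbd_data_pool erasure
ceph osd pool set pve_rbd_data_pool allow_ec_overwrites true
ceph osd pool application enable pve_rbd_data_pool rbd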
I did a test on one production server with the correct setup for Ceph (no hardware RAID).
Upgraded from PVE 7.4-18 to PVE 8.2.4.
We also have Debian 12 LXCs, and after an HA migration to the node with the latest version of PVE, they don't want to start.
I tested with a Debian LXC with id 102:
task...
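To get more than the truncated task output, I assume the container can also be started manually with debug logging enabled, for example:
pct start 102 --debug
journalctl -b -u pve-container@102.service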
Thank you for your reply.
I know Ceph and RAID are not a good combination, but on these older servers I was unable to remove the RAID card and add a passthrough card.
I didn't configure any RAID setup on the hardware RAID controller, but I placed every disk in a bypass mode supported by the RAID card.
For...
Yes, LXC 101 was now running successfully, and I didn't want to break it again.
Because I was experiencing the same issue with LXC 102, I tried the same steps, but it seems this one has other problems?
What I found out now: on pve2-sandbox, disk 4 has a SMART failure, so I guess that disk is...
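To check which OSD sits on the failing disk and whether Ceph already noticed it, something like this should do (the device name is a placeholder):
pve2-sandbox# smartctl -a /dev/sdX
pve2-sandbox# ceph osd tree
pve2-sandbox# ceph health detail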
The result:
pve2-sandbox# lxc-start -n 102 -F -lDEBUG
lxc-start: 102: ../src/lxc/sync.c: sync_wait: 34 An error occurred in another process (expected sequence number 7)
lxc-start: 102: ../src/lxc/start.c: __lxc_start: 2114 Failed to spawn container "102"
lxc-start: 102...
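The console output gets cut off here, so writing the complete debug log to a file is probably the better way to capture it:
pve2-sandbox# lxc-start -n 102 -F -l DEBUG -o /tmp/lxc-102.log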
Thanks for your reply!
I tried the following:
pve3-sandbox# pct fsck 101
fsck from util-linux 2.38.1
/mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really...
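Before going any further with fsck, it might be worth checking (with the container stopped) whether the image still looks like an ext4 filesystem at all, and trying a backup superblock (32768 for a 4k block size):
pve3-sandbox# file /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw
pve3-sandbox# dumpe2fs -h /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw
pve3-sandbox# e2fsck -b 32768 /mnt/pve/cephfs/vm-lxc-storage/images/101/vm-101-disk-1.raw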
I'm using the following Debian version in all LXCs:
# cat /etc/debian_version
12.4
All hosts in the cluster and all LXCs are running the same version.
The LXCs only boot on pve3-sandbox after a restore from backup.
Even after a restore from a backup, they don't boot on pve1-sandbox &...
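To rule out a per-node difference, I guess the container config and the storage status can simply be compared on each node:
pct config 101
pvesm status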
Yes, I'm sure. I updated the 3 nodes on the same day from the same repos (confirmed on each node, posted only once below):
pve:~# cat /etc/apt/sources.list
deb http://ftp.be.debian.org/debian bookworm main contrib
deb http://ftp.be.debian.org/debian bookworm-updates main contrib
# PVE...
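The installed package versions can be compared across the three nodes with:
pveversion -v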
Hi
Thank you for your reply.
Without doing anything to the LXCs themselves, and even when they are not in HA mode, the LXC crashes after 2 days, also on the same host where it ran well before the migration to PVE 8.
I don't know if this information is relevant, but I wanted to mention it.
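If it helps for the next crash, the host logs around that time could probably be collected with something like:
journalctl --since "2 days ago" | grep -i -E 'lxc|oom|segfault'
dmesg -T | grep -i -E 'oom|error'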
Below you find...
Dear members of Proxmox forum
I have a question about HA issues with LXCs that started after the upgrade of Proxmox VE from version 7.4-18 to 8.2.2.
We have multiple Debian 12 LXCs running on our PVE clusters: one PVE cluster is a development environment and one is a production environment.
I...