But not for hypervisors. I have large customers with many thousands of VMs. Those servers all use DHCP. The only exceptions are the hypervisors, backup, and storage. And for good reason.
You really need your old cluster map extracted from the OSDs.
If you only deploy a new MON, you are effectively creating a new Ceph cluster. The existing OSDs will not be able to join it.
The ceph.conf file does not matter here. It only tells the...
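In case it helps, here is a rough sketch of the usual recovery path with ceph-objectstore-tool (the OSD data path and output directory below are assumptions for illustration, not your actual layout). With the OSD stopped, the cluster map info can be accumulated from each OSD into a fresh MON store:

# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op update-mon-db --mon-store-path /tmp/mon-store

Repeat this for every OSD, then rebuild the MON from that store instead of starting with an empty database.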
Yes, migrating the VM to another host and back to the host with the newly enabled KRBD setting will start a new QEMU instance for the VM using KRBD. To check whether a VM is using KRBD, the following greps will match in the respective case:
librbd:
qm showcmd VMID |...
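Purely as an illustration of what such a check could look like (the VMID 100 and the exact grep patterns are my assumptions, not the full commands from above): with librbd the generated QEMU command line references the rbd: protocol, while with KRBD the disk shows up as a mapped block device:

# qm showcmd 100 | grep 'rbd:'
# qm showcmd 100 | grep '/dev/rbd'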
Please don’t use BTRFS for real-world use cases unless you really don’t care about your data. I highly doubt BTRFS is any faster given the same parameters (proper sync reads/writes); in most benchmarks I’ve seen in the past, BTRFS is slower. If you...
When using Docker in a business context, I faced the problem that rootless Docker is optional and many Docker images do not work correctly in rootless mode. I would never ever run Docker as root on my PVE host. Another reason is that Docker creates lots...
Container virtualization will never replace a hypervisor: since a container depends on the host kernel, you will never be able to run a container that is independent of the host platform.
You need to create the OCFS2 filesystem with "-T vmstore", which creates 1MB clusters for the files.
Each time a file needs to be enlarged, all nodes have to communicate so that they all know about the newly allocated blocks.
With larger cluster...
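For reference, creating such a filesystem could look roughly like this (device, label, and node-slot count are assumptions for illustration):

# mkfs.ocfs2 -T vmstore -N 3 -L vmstore /dev/sdb1

Here -T vmstore selects the large-cluster layout mentioned above and -N reserves slots for the cluster nodes.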
From kernel 6.5 to at least 6.8 there is an issue with OCFS2 and io_uring that produces IO errors inside the VM.
Unfortunately OCFS2 is not well maintained.
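If you are hit by this, a plausible mitigation is to move the affected VM disks off io_uring, sketched here with an assumed VMID, storage, and volume name (qm set needs the full volume spec to change the aio mode):

# qm set 100 --scsi0 ocfs2-store:vm-100-disk-0,aio=threads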
With "# ceph-mon --monmap /root/monmap --keyring /etc/pve/priv/ceph.mon.keyring --mkfs -i nextclouda -m 10.0.1.1" you created a new MON database (--mkfs) and removed all info from the old one, not only the monmap.
You should have just inserted...
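For the record, the non-destructive monmap workflow looks roughly like this (the MON ID is taken from your command above, the paths are assumptions): extract the monmap from the stopped MON, inspect or adjust it, and inject it back instead of recreating the store with --mkfs:

# ceph-mon -i nextclouda --extract-monmap /root/monmap
# monmaptool --print /root/monmap
# ceph-mon -i nextclouda --inject-monmap /root/monmap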
You want to enhance your current situation, right?
Are you sure you want Ceph with only three nodes? A reliable system with features like self-healing has some pitfalls - besides the (usually) much lower performance compared to local storage...
As that ZFS release has some other relatively big changes, e.g. how the ARC is managed, especially w.r.t. interaction with the kernel and marking its memory as reclaimable, we did not feel comfortable including ZFS 2.3 already, but rather...
We are excited to announce that our latest software version 8.4 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.10 "Bookworm", but uses the Linux kernel 6.8.12 as default and kernel 6.14 as opt-in, QEMU...
The color scheme of the commands mostly depends on how your environment variables are set up; the "TERM" variable is especially important.
The commands then decide how to send color info based on that. Which variable is needed/supported depends...
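As a quick illustration (the values shown are just examples): check what your terminal advertises, and force colors on or off to compare:

# echo $TERM          (e.g. xterm-256color)
# ls --color=always   (force color output)
# ls --color=never    (disable color output)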
We recently uploaded a 6.14 kernel to our repositories. The current 6.8 kernel will stay the default on the Proxmox VE 8 series; the newly introduced 6.14 kernel is an option.
The 6.14 based kernel may be useful for some (especially newer)...
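Opting in should boil down to installing the new kernel package and rebooting (the package name assumes the usual proxmox-kernel-<version> naming scheme):

# apt update
# apt install proxmox-kernel-6.14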
I've done this a few times already without issues regarding the storage. You must be absolutely careful that the VM isn't started in both clusters at the same time, though. Make sure no "start on boot" or HA configs are in place, so the VM...
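For example (VMID 100 is an assumption), before the move you can make sure the VM neither autostarts nor is HA-managed:

# qm set 100 --onboot 0
# ha-manager remove vm:100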