We manually use
https://microsoft.github.io/CSS-Exchange/Databases/VSSTester/
to clean up / purge the Exchange transaction logs after Proxmox VM snapshot backups.
Hello Richard,
We have successfully completed the infrastructure upgrade.
GPU passthrough is also working so far on the 10 machines running kernel 5.15.
For a short time we only had problems starting UEFI VMs on Ceph storage.
The error was "rbd_cache_policy=writeback: invalid conf option...
It is not intended to run all of the migrated VMs from the 12 nodes on the thirteenth node.
The thirteenth node only takes over the "VM (KVM) profile settings" and the "VM (block) data" in the meantime, so that the data is transferred from the nodes' local storage (ZFS) to Ceph...
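For anyone interested in how the actual transfer looks in practice: a minimal sketch of moving a single VM disk from local ZFS to the Ceph RBD storage with the standard qm tooling. VM ID 100, disk scsi0 and the storage ID "ceph-rbd" are placeholder examples, not our real configuration.
# Show which disks the VM has and where they currently live
qm config 100 | grep -E '^(scsi|virtio|sata|ide)'
# Move the disk to the Ceph RBD storage and delete the old ZFS volume
# after the copy has succeeded (works online in PVE 7)
qm move-disk 100 scsi0 ceph-rbd --delete 1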
Hello all,
We are currently running a Proxmox cluster consisting of 12 nodes with PVE 6.4 (latest patches) and local ZFS storage.
In the future, the whole PVE cluster will be attached to shared Ceph storage.
To migrate the infrastructure to the new PVE version 7.3, we wanted to use our external...
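As a side note, the usual first step for an in-place 6.4 -> 7.x upgrade is the checker script that ships with PVE 6.4; a minimal sketch (read-only, run per node):
# Reports potential upgrade problems, does not change anything on the node
pve6to7 --full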
@YAGA
- add SSDs / NVMEs to the nodes
- create a "replicated_rule" based on "device-class" and move the "cephfs_metadata" pool to the SSDs/NVMEs
Maybe this will speed up your CephFS "a bit".
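For illustration only, the rough command sequence could look like this; the rule name replicated_ssd is a made-up example and it assumes your metadata pool is really called cephfs_metadata:
# Create a CRUSH rule restricted to the "ssd" device class
# (root "default", failure domain "host")
ceph osd crush rule create-replicated replicated_ssd default host ssd
# Pin the CephFS metadata pool to that rule
ceph osd pool set cephfs_metadata crush_rule replicated_ssd
# Verify that the new disks were actually classified as ssd/nvme
ceph osd tree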
@hthpr
I now read frequently in the forum that there are problems with the 5.15 kernel (supposedly also with PCI / GPU passthrough).
The changelog of 5.15.49 is long, https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49
if necessary it is worth searching it for "scheduler" fixes...
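If you just want to skim that changelog for specific topics without a browser, something quick and dirty like this is enough (the grep pattern is only an example):
# Download the 5.15.49 changelog and show lines mentioning the scheduler
curl -s https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49 | grep -i sched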
You don't need to replicate the "payload" data of the VM itself; that is what the shared Ceph RBD storage is for.
You only need to run the VM in the Proxmox cluster in HA mode.
(Additional data-cluster solutions such as DRBD or GlusterFS are unnecessary.)
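Putting a VM under HA management is a one-liner; a minimal sketch, where VM ID 100 is a placeholder example:
# Manage VM 100 via HA so it is restarted on another node if its node fails
# (requires shared storage such as Ceph RBD for the disks)
ha-manager add vm:100 --state started
# Check the HA status of all managed resources
ha-manager status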
2. CephFS is very resource-hungry due to the metadata handling of the MDS. The setup should be well tuned; this requires good planning, with a division into HDD, SSD and NVMe device classes and offloading of the WAL+DB.
Yes, CephFS has the advantage that you can mount it directly multiple times...
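To make the WAL+DB offloading a bit more concrete: when an OSD is created on an HDD, its RocksDB/WAL can be placed on a faster device. A rough sketch with placeholder device paths only:
# /dev/sdb is an example HDD for the OSD data,
# /dev/nvme0n1 an example NVMe that takes the DB (and with it the WAL)
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
# Check the resulting device classes afterwards
ceph osd tree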
We had a similar problem that we could not solve: https://forum.proxmox.com/threads/extremely-slow-ceph-storage-from-over-60-usage.101051/
Therefore, a few comments from my side:
1. With 4 nodes you should not have a monitor or a standby MDS active on every node; if one node fails, the...
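To see where monitors and MDS daemons are currently running, a couple of read-only commands are enough; removing a monitor from a node is also a single command (the node name is a placeholder):
# Show the monitor quorum and the active/standby MDS layout
ceph mon stat
ceph fs status
# Remove a superfluous monitor from a node via the Proxmox tooling
pveceph mon destroy <nodename>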
Wir haben einen "Labor" Ceph Object Storage, welcher aus einem 4x-Multinode-Server und den folgenden Node-Komponenten besteht:
Pro Node:
PVE Manager Version pve-manager/7.1-7/df5740ad
Kernel Version Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100)
24 x Intel(R)...
We have a "Lab" Ceph Object Storage consisting of a 4x Multinode Server and the following Node components:
Per Node:
PVE Manager Version pve-manager/7.1-7/df5740ad
Kernel Version Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100)
24 x Intel(R) Xeon(R) CPU X5675 @...
Hello Stoiko,
sounds great.
Notes:
- Our first nodes were set up with 6.3
- Yesterday's installation on the node with 6.4 was only a RAID 1 system with a spare added afterwards
(administrative rework is OK in this case, see the sketch below)
- The upcoming installations of the...
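Just to illustrate the kind of administrative rework meant above: attaching a hot spare to the ZFS pool after the installation is a single command; a minimal sketch, assuming the default pool name rpool and a placeholder disk ID:
# "rpool" is the default pool of a PVE ZFS installation,
# the disk ID is a placeholder for the real spare disk
zpool add rpool spare /dev/disk/by-id/ata-EXAMPLE_DISK
# Verify that the spare shows up in the pool layout
zpool status rpool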
Hello community,
for the last 5 years we ran our production infrastructure on SmartOS (an Illumos kernel based hypervisor), https://www.smartos.org/, and have now switched to Proxmox.
We are thrilled with Proxmox.
However, we noticed a suboptimal partitioning scheme of the...