Hi,
Thanks for sharing your work. Up front: are you aware of the native plugin mechanism that Proxmox VE offers, or are there reasons for developing this plugin as a patch against the code rather than using the defined plugin interface?
See...
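For reference, third-party storage plugins are usually shipped as a Perl module under the PVE::Storage::Custom namespace rather than as a patch. A minimal sketch of where that lives and how PVE picks it up (paths and service list from memory, so double-check against the storage plugin documentation):

```
# Custom storage plugins are dropped in as Perl modules under:
ls /usr/share/perl5/PVE/Storage/Custom/
# After installing or updating the module, restart the PVE services
# so they load it:
systemctl restart pvedaemon pveproxy pvestatd
```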
Hello all! I'm a developer with OSNEXUS.
We’re excited to announce the pre-release version of the QuantaStor Proxmox Storage Plugin, designed to simplify the management of QuantaStor storage within Proxmox VE clusters. This plugin enables...
With PVE on a shared LUN, you use LVM to carve it up into VM disks; that is block storage.
See https://pve.proxmox.com/wiki/Multipath, which also covers the final setup on top of multipathing (which you will usually have in such setups).
No, that is a complex...
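As a rough sketch of the LVM-on-shared-LUN part once multipathing is in place (device, VG, and storage names here are placeholders, adjust to your environment):

```
# Check that the multipath device for the LUN is visible
multipath -ll
# Create a PV and a VG on the multipath device (placeholder name mpatha)
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha
# Register it cluster-wide as shared LVM storage (run on one node)
pvesm add lvm san-lvm --vgname vg_san --content images --shared 1
```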
I believe that is what I was referencing in my original post:
I ended up just figuring out which packages pveceph install pulls in and installing those directly with apt.
If you need more than RBD (block storage) and CephFS (file storage) from your Ceph cluster, you should consider not deploying it with the Proxmox tools.
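For anyone else going that route, a hedged sketch of what it looks like; the exact package set and repository line are assumptions, so check them against what pveceph install would configure on your version:

```
# pveceph install writes the Ceph repository to this file; reuse it or
# add an equivalent line yourself (release/suite below are assumptions):
cat /etc/apt/sources.list.d/ceph.list
# deb http://download.proxmox.com/debian/ceph-squid bookworm no-subscription

# Then install the Ceph packages directly:
apt update
apt install ceph ceph-common ceph-mon ceph-mgr ceph-osd ceph-mds
```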
VMware enables this behavior because it has VMFS - a cluster-aware, shared filesystem. VMFS implements proprietary on-disk locking and reservation mechanisms that safely coordinate multi-initiator access to the same disk.
Proxmox does not...
You cannot have shared storage in a two-node non-cluster installation. Shared storage, as defined in PVE terminology, is part of the cluster.
Anything you do beyond that is a road to data corruption. There are coordinated operations...
We have been running a cluster of 8 Proxmox VE nodes with OCFS2 on a SAN LUN for several years.
It works, but it is not officially supported. There have also been some issues with OCFS2 on recent kernels, as OCFS2 development seems to have come to a...
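For completeness, once the OCFS2 filesystem is mounted on every node, it is typically added as a shared directory storage, roughly like this (storage ID and path are placeholders, and the option set should be checked against the dir storage documentation):

```
# Add the mounted OCFS2 filesystem as a shared "dir" storage.
# is_mountpoint prevents PVE from writing into an empty mountpoint
# if the filesystem happens not to be mounted:
pvesm add dir ocfs2-san --path /mnt/ocfs2 --content images --shared 1 --is_mountpoint yes
```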
Hi @PelikanNetzwerk, welcome to the forum.
Generally, users have reported success bootstrapping on iSCSI with vanilla Debian and then installing PVE on top of it.
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_13_Trixie
There are a...
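The short version of that wiki article, as a sketch; the repository line, keyring URL, and package list follow the pattern of earlier releases, so verify them against the linked page before running anything:

```
# Add the Proxmox VE no-subscription repository for Debian 13 (Trixie)
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-trixie.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-trixie.gpg
# Upgrade the base system and install Proxmox VE
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi chrony
```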
Hello Nick,
I've always hated the NetApp sales teams in Germany, who present NetApp as the optimal block storage for VMware.
I have always perceived NetApp as the best filer, and the ONTAP products probably still are. VMware's strong focus on block storage...
Hi Community!
The recently released Ceph 20.2 Tentacle is now available on the Proxmox Ceph test repository for installation or upgrade.
Upgrades from Squid to Tentacle:
You can find the upgrade how-to here...
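For reference, a sketch of pointing a node at the test repository before following the how-to; the exact repository line is an assumption based on previous releases, so treat the upgrade guide as authoritative:

```
# Point the Ceph repository at Tentacle on the test component
# (suite/component here are assumptions, check the upgrade how-to):
echo "deb http://download.proxmox.com/debian/ceph-tentacle trixie test" \
    > /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade
# After restarting the daemons, verify which Ceph versions are running
ceph versions
```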
Happy New Year! Sorry for the delay on this... we wanted to do a proper analysis.
Both cache=direct and cache=writethrough provide substantially stronger consistency guarantees than cache=none. If you are using QCOW/LVM and consistency is a...
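For completeness, the cache mode is set per virtual disk; a usage sketch with placeholder VM ID, storage, and volume names:

```
# Switch an existing virtual disk to writethrough caching
# (takes effect once the disk is reattached or the VM is restarted):
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writethrough
```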
You are completely right. I always mistakenly assumed that clean was part of apt autoremove.
But clean is needed to finally bury the big leftovers, microsurgically rather than with a sledgehammer.
Thanks.
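To illustrate the difference:

```
# autoremove removes packages that were pulled in as dependencies and are
# no longer needed; it does not touch the download cache:
apt autoremove
# clean empties the local cache of downloaded .deb files:
apt clean
du -sh /var/cache/apt/archives   # close to empty after apt clean
```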
No, as soon as there is no longer quorum among the MONs, i.e. a majority of the MONs no longer see each other, the cluster will stop working.
And for the number of MONs: The cephadm orchestrator deploys 5 by default for the reasons I outlined...
The current recommendation from the Ceph project is to run 5 MONs.
With only three MONs you are in a high-risk situation after losing just one MON: lose another and your cluster stops.
With five MONs you can lose two and the cluster will...
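The underlying arithmetic, plus a quick way to check the current quorum (a sketch; flags and output may differ slightly between Ceph versions):

```
# Quorum needs a strict majority: floor(N/2) + 1 monitors.
#   3 MONs -> quorum needs 2 -> tolerates 1 failure
#   5 MONs -> quorum needs 3 -> tolerates 2 failures
ceph quorum_status -f json-pretty
ceph mon stat
```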