So, now I know why GFS2 is unsupported on the Proxmox side... It works and performance was also very good, but it breaks after some time (more than half a year). Then it is painful when everything is stuck and all PVE nodes need to be restarted and...
In my home lab, I renamed the network interfaces using:
pve-network-interface-pinning generate --prefix eth
After that, I deleted the originally generated files from /usr/local/lib/systemd/network/
(in my case the file was named, for example...
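For anyone retracing these steps, a minimal sketch of the cleanup; the .link file name below is hypothetical, so check what the tool actually generated on your system first:

# list the .link files the pinning tool generated
ls /usr/local/lib/systemd/network/
# remove the generated file (hypothetical name, use the one listed above)
rm /usr/local/lib/systemd/network/50-pve-eth0.link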
Hi, using GFS2 I hit a bug today:
[2479853.036266] ------------[ cut here ]------------
[2479853.036509] kernel BUG at fs/gfs2/inode.h:58!
[2479853.036721] Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
I need to restart the PVE node. :-/
The latest Ceph release (20) removes both:
https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/#changes
MGR
Users now have the ability to force-disable always-on modules.
The restful and zabbix modules (deprecated since 2020) have been...
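If you still rely on either module, it may be worth disabling them before the upgrade. A minimal sketch using the standard mgr commands (module names taken from the release note above):

# see which mgr modules are currently enabled
ceph mgr module ls
# disable the deprecated modules before moving to v20
ceph mgr module disable restful
ceph mgr module disable zabbix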
When a 3-node Ceph cluster is set up using full mesh routed mode, is it possible to use this Ceph network for migration as well?
In the GUI I have to select an interface, but in this case there are actually two interfaces, each going to a different node.
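As far as I know, the migration network can also be set as a CIDR in /etc/pve/datacenter.cfg instead of picking a single interface, which should cover both links of the mesh. A sketch, assuming 10.15.15.0/24 is your full-mesh Ceph subnet (the subnet is hypothetical):

# /etc/pve/datacenter.cfg
migration: secure,network=10.15.15.0/24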
I think it is fine if you do not use ZFS.
Personally, I am using mdadm RAID 1 (yes, I know...), 2x MX500 2 TB with LVM on top for the VMs. It has been running fine for 3 years.
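For reference, a minimal sketch of that layout; the device names (/dev/sda, /dev/sdb) and the VG name are hypothetical:

# mirror the two SSDs
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# put LVM on top of the mirror for the VM disks
pvcreate /dev/md0
vgcreate vmdata /dev/md0
# then add the VG as LVM storage in Proxmox
pvesm add lvm vmdata --vgname vmdata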
You can do it via disk passthrough (CLI): https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM). Since FC SAN devices usually have multiple paths, you also have to configure multipath...
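A sketch of the passthrough step from that wiki page, assuming the multipath device shows up as /dev/mapper/mpatha and the VM ID is 100 (both hypothetical):

# attach the multipath device to VM 100 as an additional SCSI disk
qm set 100 -scsi1 /dev/mapper/mpatha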