In my home lab, I renamed the network interfaces using:
pve-network-interface-pinning generate --prefix eth
After that, I deleted the originally generated files from /usr/local/lib/systemd/network/
(in my case the file was named, for example...
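For reference, the cleanup looked roughly like this. The actual filename is truncated above, so `50-pve-eth0.link` below is only a hypothetical example:

```
# List the .link files generated by pve-network-interface-pinning
ls /usr/local/lib/systemd/network/
# Remove the generated pin file (hypothetical name)
rm /usr/local/lib/systemd/network/50-pve-eth0.link
```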
Hi, using GFS2, and today I hit a bug:
[2479853.036266] ------------[ cut here ]------------
[2479853.036509] kernel BUG at fs/gfs2/inode.h:58!
[2479853.036721] Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
Need to restart the PVE node. :-/
The latest Ceph release (20) removes both:
https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/#changes
MGR
Users now have the ability to force-disable always-on modules.
The restful and zabbix modules (deprecated since 2020) have been...
When a 3-node Ceph cluster is set up using full mesh (routed mode), is it possible to use this Ceph network for migration as well?
In the GUI I have to select an interface, but there are actually two interfaces in this case, each going to a different node.
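Not via the GUI dropdown, but the migration network can be given as a CIDR in /etc/pve/datacenter.cfg, which sidesteps having to pick a single interface. A sketch, with a placeholder subnet for the mesh network:

```
# /etc/pve/datacenter.cfg
# Route migration traffic over the full-mesh Ceph network (example subnet)
migration: secure,network=10.10.10.0/24
```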
I think it is fine if you will not use ZFS.
Personally I am using mdadm RAID 1 (yes, I know...), 2x MX500 2 TB with LVM on top for VMs. Running fine for 3 years.
You can do it with disk passthrough (CLI): https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM). Since FC SAN devices usually have multiple paths, you also have to configure multipath...
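The passthrough itself is a single qm call once multipath is in place. A rough sketch, assuming VMID 104 and a multipath device named mpatha (both placeholders):

```
# Check the multipath topology first
multipath -ll
# Pass the multipath device through to VM 104 as scsi1
qm set 104 -scsi1 /dev/mapper/mpatha
```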
This official doc https://docs.ceph.com/en/reef/mgr/zabbix/ can be used, but the instructions are not correct. In particular:
This is not an optional but a required step.
Also, Plugins.Ceph.InsecureSkipVerify=true in zabbix_agent2.conf is required. Guide...
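The agent setting mentioned above, as a config fragment (the path is the usual default, assumed here):

```
# /etc/zabbix/zabbix_agent2.conf
# Skip TLS verification for the built-in Ceph plugin
Plugins.Ceph.InsecureSkipVerify=true
```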
I've been hitting this on multiple disks, not just the small one (EFI). The root cause seems to be that the VM was running with CPU type 'host' while the cluster nodes did not have identical CPUs. Fixed by setting a different CPU profile for the VM (x86-64-v4).
This error is very...
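Switching the CPU profile is a single qm command; a sketch, assuming VMID 104 as a placeholder:

```
# Change the VM from CPU type 'host' to a generic profile
qm set 104 --cpu x86-64-v4
```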
I manually cleared (lvremove) the invalid snapshots and updated to the latest from pve-test. Then I had to set "10.0+pve1" as the machine version. After this, creating snapshots works again. If it breaks again on current I will open a Bugzilla ticket.
I have set up one PVE host with one LUN from the SAN, and users are testing the Snapshots as Volume-Chain feature.
They have since broken it; they are QA, so that is their job.
There are no snapshots on the VM:
root@pve04:~# qm listsnapshot 104
`-> current...
Found a reddit thread about this; monitoring using htop:
# configure htop display for CPU stats
htop
(hit F2)
Display options > enable "Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest)"
Select Screens > Main
Available columns > ...
Hitting this error/problem.
I see this difference with haproxy:
https://host/api2/json/cluster/ceph/status
{
"data": null,
"message": "binary not installed: /usr/bin/ceph-mon\n"
}
Without haproxy...
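For comparison, the endpoint above can be queried directly with curl. A sketch; the API token name and secret are placeholders:

```
# Query Ceph status via the PVE API (token is a placeholder)
curl -sk \
  -H 'Authorization: PVEAPIToken=root@pam!monitor=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' \
  https://host/api2/json/cluster/ceph/status
```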