Hey,
thanks for the output. Unfortunately I currently can't explain why it ended up doing what it did.
Could you please run the following debug build and get the journal? http://download.proxmox.com/temp/pve-cluster-9-rrd-debug-v2/
This one...
If you set "is_mountpoint", it should be a dataset. Otherwise the path would not be a mountpoint.
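As a rough sketch of that combination (the dataset, mountpoint, and storage names are just placeholders, not taken from your setup):
zfs create -o mountpoint=/mnt/backup tank/backup
pvesm set backup --is_mountpoint yes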
Using a dedicated dataset is what I would do. Having that separation gives you some benefits, for example, should you ever want to use the...
Keep in mind that this stems from a time when all we had were HDDs. Given that ZFS is copy on write, the data will fragment over time, and if the HDD is full, it will need more time to find unused space on the disk. With SSDs where the seek time...
Doesn't even look too bad. One more thing you need to be aware of is that `zpool` will show you raw storage and `zfs` usable storage. As in, IIUC, you have 3x 480G SSDs in that raidz1 pool.
The overall Used + AVAIL in the `zfs list` output for the pool...
VMs are stored in datasets of type volume (zvol) which provide blockdevs. In any raidz pool they need to store parity blocks as well. That is most likely what eats away the additional space. How much it is depends on the raidzX level, the...
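To see both views side by side, something like this should do (assuming the pool is called "tank"):
zpool list tank                            # raw capacity, parity included
zfs list tank                              # usable space after parity overhead
zfs list -t volume -o name,used,volsize    # how much the VM zvols take up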
Hmm, the web interface itself runs in your local browser. But if you mean that it loads slowly whenever it fetches data from the server, then there could be a few things.
The kernel panic doesn't look too good.
This could be a hardware problem...
AFAIU it is considered a tech preview. It is marked as such in the GUI when you create a new storage. Why do we mark it as tech-preview? Because it is a new and major feature that has the potential for edge-cases that are not yet handled well. By...
Thanks for bringing this to our attention.
I just sent out a patch to fix this. https://lore.proxmox.com/pve-devel/20250828125810.3642601-1-a.lauterer@proxmox.com/
Yep. Even though the situations sound a bit contrived. But in reality, who knows what sequence of steps might lead to something similar :)
If you want to have different device classes and want specific pools to make use of only one, you need...
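A sketch of how that could look, for example for an SSD-only pool (rule and pool names are placeholders):
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set {pool} crush_rule replicated_ssd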
That is true... Especially if you don't set the "size" larger than 3.
The additional step to distribute it per host is one more failsafe, just in case you have more nodes per room and a pool with a larger size.
One more thing: if you want to prevent people (including yourself ;) ) from changing the size property, you can run
ceph osd pool set {pool} nosizechange true
Should you ever plan to have more nodes per room, the following CRUSH rule would be better, as it makes sure that replicas end up on different hosts:
rule replicate_3rooms {
    id {RULE ID}
    type replicated
    step take default
    step choose firstn 3 type room
    step chooseleaf firstn 1 type host
    step emit
}
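To make a pool actually use that rule once it is in the CRUSH map, something like this should do it ({pool} is a placeholder):
ceph osd pool set {pool} crush_rule replicate_3rooms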
Name    Size  Min Size
main_3  2     2
There you go. That pool has a size of 2. That means that some PGs only have one replica present, because the only other one was on the lost node.
Ceph should recover those once the DOWN OSDs are set to...
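And should you decide to bring that pool to the recommended 3/2 afterwards, roughly (pool name taken from the table above):
ceph osd pool set main_3 size 3
ceph osd pool set main_3 min_size 2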
The problem is this:
pgs: 64.341% pgs not active
793382/2444972 objects degraded (32.450%)
83 undersized+degraded+peered
Some PGs are not active, and therefore you have IO issues.
Was the cluster healthy before...
Well, as others mentioned, if one node is down, the Ceph MONs and Proxmox VE nodes should still have a quorum with 2 out of 3.
Data-wise, if you have set size/min_size to 3/2 in all the pools, things should keep working, as you should still have 2...
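You can quickly check that per pool with ({pool} being a placeholder):
ceph osd pool get {pool} size
ceph osd pool get {pool} min_size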
Hmm. It seems that the detection of which files or directories are present in the /var/lib/rrdcached/db directory is coming to the wrong conclusions.
Would you mind posting the output of the following command?
for i in pve2-vm pve-vm-9.0; do echo...
See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher
Is the Ballooning Device enabled and is the Ballooning Service running?
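One way to check, assuming a Linux guest and VMID 100 as placeholders:
qm config 100                   # on the host: look at the memory/balloon settings
lsmod | grep virtio_balloon     # inside the guest: is the balloon driver loaded?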
Generally speaking, what happened there sounds a bit odd.
Check whether you still have /etc/pve/nodes/{old nodes}/qemu-server directories and whether the configs are still in there. If so, you can move them into the correct one with mv.
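For example (node names and the VMID are placeholders):
mv /etc/pve/nodes/oldnode/qemu-server/100.conf /etc/pve/nodes/newnode/qemu-server/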
To get more debug output from the processing side, can you please install the following build of pve-cluster?
http://download.proxmox.com/temp/pve-cluster-9-rrd-debug/
wget...