Search results

  1. aaron

    ZFS Pool Usage Reporting Higher than Actual VM Disk Usage

    Doesn't even look too bad. One more thing you need to be aware of is that `zpool` will show you raw storage and `zfs` the usable storage. As in, IIUC, you have 3x 480G SSDs in that raidz1 pool. The overall Used + AVAIL in the `zfs list` output for the pool itself is around ~860G. With 1 disk of parity in mind...
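    As a rough back-of-the-envelope check (figures assumed, not from the original post; 480 GB is about 447 GiB):

        raw, as shown by `zpool list`:   3 x ~447 GiB ≈ 1.31 TiB
        usable, as shown by `zfs list`:  2 x ~447 GiB ≈ 894 GiB, minus metadata/reservations, which roughly matches the ~860G above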
  2. aaron

    ZFS Pool Usage Reporting Higher than Actual VM Disk Usage

    VMs are stored in datasets of type volume (zvol) which provide blockdevs. In any raidz pool they need to store parity blocks as well. That is most likely what eats away the additional space. How much it is depends on the raidzX level, the volblocksize of the zvol and the ashift. See...
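    To inspect the properties mentioned there, something along these lines should work (the zvol name is only an example):

        # pool-wide ashift
        zpool get ashift <pool>
        # volblocksize of a specific VM disk
        zfs get volblocksize <pool>/vm-100-disk-0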
  3. aaron

    Sluggish webui after upgrade to PVE 9

    Hmm, the web interface itself runs in your local browser. But if you mean that it loads slowly whenever it fetches data from the server, then there could be a few things. The kernel panic doesn't look too good. This could be a hardware problem. Try to update the BIOS/firmware of the...
  4. aaron

    Proxmox Virtual Environment 9.0 released!

    AFAIU it is considered a tech preview. It is marked as such in the GUI when you create a new storage. Why do we mark it as tech-preview? Because it is a new and major feature that has the potential for edge-cases that are not yet handled well. By releasing it to the public, we hope to get...
  5. aaron

    Proxmox Virtual Environment 9.0 released!

    Thanks for bringing this to our attention. I just sent out a patch to fix this. https://lore.proxmox.com/pve-devel/20250828125810.3642601-1-a.lauterer@proxmox.com/
  6. aaron

    [SOLVED] 3-node cluster

    Yep. Even though the situations sound a bit contrived. But in reality, who knows what sequence of steps might lead to something similar :) If you want to have different device classes and want specific pools to make use of only one, you need to add one more step to match the device class. For...
  7. aaron

    [SOLVED] 3-node cluster

    That is true... Especially if you don't set the "size" larger than 3. The additional step to distribute it per host is one more failsafe, just in case you have more nodes per room and a pool with a larger size.
  8. aaron

    [SOLVED] 3-node cluster

    One more thing: if you want to prevent people (including yourself ;) ) from changing the size property, you can run ceph osd pool set {pool} nosizechange true
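    A quick sketch for verifying and undoing that flag (not part of the original post):

        ceph osd pool get {pool} nosizechange
        ceph osd pool set {pool} nosizechange false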
  9. aaron

    [SOLVED] 3-node cluster

    Should you ever plan to have more nodes per room, the following CRUSH rule would be better, as it makes sure that replicas need to end up on different hosts: rule replicate_3rooms { id {RULE ID} type replicated step take default step choose firstn 0 type room step chooseleaf...
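    The snippet is cut off; a complete rule of that shape would look roughly like this (the final two steps are an assumption based on standard CRUSH syntax, not taken from the original post):

        rule replicate_3rooms {
            id {RULE ID}
            type replicated
            step take default
            step choose firstn 0 type room
            step chooseleaf firstn 1 type host
            step emit
        }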
  10. aaron

    [SOLVED] 3-node cluster

    Name    Size  Min Size
    main_3  2     2
    There you go. That pool has a size of 2. That means that some PGs only have one replica present, because the only other one was on the lost node. Ceph should recover those once the DOWN OSDs are set to OUT (should happen after 10 min automatically)...
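    For reference, roughly how that could be checked and adjusted (pool name taken from the snippet, 3/2 being the usual recommendation):

        ceph osd pool get main_3 size
        ceph osd pool set main_3 size 3
        ceph osd pool set main_3 min_size 2
        # mark a DOWN OSD as OUT manually instead of waiting ~10 min
        ceph osd out <osd-id>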
  11. aaron

    [SOLVED] 3-node cluster

    The problem is this:
    pgs: 64.341% pgs not active
         793382/2444972 objects degraded (32.450%)
         83 undersized+degraded+peered
    Some PGs are not active, and therefore you have IO issues. Was the cluster healthy before one node/room went down? Could you also post the...
  12. aaron

    [SOLVED] 3-node cluster

    Well, as others mentioned, if one node is down, the Ceph MONs and Proxmox VE nodes should still have a quorum with 2 out of 3. Data-wise, if you have set size/min_size to 3/2 in all the pools, things should keep working as you should still have 2 of 3 replicas. The question is, in what state was...
  13. aaron

    syslog getting spammed with "notice: RRD update error"s

    Hmm. It seems that the detection of which files or directories are present in the /var/lib/rrdcached/db directory is coming to the wrong conclusions. Would you mind posting the output of the following command? for i in pve2-vm pve-vm-9.0; do echo "#### ${i}:" && ls -l /var/lib/rrdcached/db/$i; done
  14. aaron

    Windows Guest memory utilization problem on PVE 9.0.x

    See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher Is the Ballooning Device enabled and is the Ballooning Service running?
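    A quick way to check on the host side might be (the VMID is a placeholder; an explicit balloon setting only shows up if one is configured):

        qm config <vmid> | grep -i balloon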
  15. aaron

    VM disappeared after cluster dissolution

    Basically, what happened there sounds a bit strange. Check whether you still have /etc/pve/nodes/{old nodes}/qemu-server directories and whether the configs are still in there. Then you can move them into the correct one with mv.
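    A sketch of the move described above (node names and VMID are placeholders):

        mv /etc/pve/nodes/<old-node>/qemu-server/<vmid>.conf \
           /etc/pve/nodes/<current-node>/qemu-server/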
  16. aaron

    syslog getting spammed with "notice: RRD update error"s

    To get more debug output from the processing side, can you please install the following build of pve-cluster? http://download.proxmox.com/temp/pve-cluster-9-rrd-debug/ wget http://download.proxmox.com/temp/pve-cluster-9-rrd-debug/pve-cluster_9.0.6%2Bdebug-rrd-1_amd64.deb wget...
  17. aaron

    syslog getting spammed with "notice: RRD update error"s

    Thanks. That looks good and is as it should be. So I will have to take a look at the code that is receiving and processing that data.
  18. aaron

    syslog getting spammed with "notice: RRD update error"s

    This is curious. Would it be okay for you to gather a bit more information? Because it seems that for some reason, the pvestatd service still collects and distributes the old pre-PVE-9 metric format, but under the new key... So to further see what might be going on, could you please do the...
  19. aaron

    OPNSense VM at 100% memory usage after update to 9.05

    See also https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher
  20. aaron

    100% Swap Usage

    Swap is more than just an escape hatch for low memory: https://chrisdown.name/2018/01/02/in-defence-of-swap.html But given that the host has ~185G of memory, you could consider disabling swap; that is a lot of memory, and if you do run out of it, those 8G of swap are most likely not enough...
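    A minimal sketch of disabling swap, assuming a swap entry in /etc/fstab (not from the original post):

        swapoff -a
        # then comment out or remove the swap line in /etc/fstab so it stays off after a reboot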