aaron's latest activity

  • aaron
    aaron replied to the thread PVE new feature from PBS?.
    On PBS you have it for file-based backups (host, CT). On PVE you have that as well, if the backup storage is a PBS. Then you can do that even for VM backups (given that the FS is supported).
  • aaron
    The patch has just been applied; a newer version is not yet released. But once you have a version newer than the mentioned ones, it will include the fix.
  • aaron
    Patches are applied and will be part of the next release of the following packages: qemu-server (newer than 9.0.19) and pve-container (newer than 6.0.9).
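    A quick way to check which versions are currently installed, using the standard PVE tooling:
    ```
    # list the installed versions of the affected packages
    pveversion -v | grep -E 'qemu-server|pve-container'
    ```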
  • aaron
    Ah sorry, I didn't look too closely at which forum the post was in... In that case, most of my reply is void ;) SSH and the HTTP API port are what is needed. So ports 22 and 8006, TCP.
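    If you want to verify that both ports are reachable from the other side, a quick check could look like this (the hostname is a placeholder):
    ```
    # check SSH and the web/API port from a remote machine
    nc -zv pve-host.example.com 22
    nc -zv pve-host.example.com 8006
    ```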
  • aaron
    Is the latency in the single-digit ms range? Do you have a stable, redundant connection? Can you provide a 3rd vote, e.g. a QDevice? These questions are especially important if you want to use HA. Otherwise, two single nodes might be the better...
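    For reference, checking the latency and adding a QDevice vote look roughly like this (IPs are placeholders; the QDevice host needs the corosync-qnetd package installed):
    ```
    # latency between the nodes should be in the single-digit ms range
    ping -c 10 <other-node-ip>

    # add a third vote via an external QDevice (run on one cluster node)
    pvecm qdevice setup <qdevice-ip>
    ```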
  • aaron
    We have customers who run 5-node full-mesh clusters, for example with 4x 25Gbit NICs. Do not go for a ring topology, as it could break in two places and then you have problems. The Routed with Fallback method is what you want...
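    As a rough sketch of the routed-with-fallback idea (not the exact config from the wiki article; interface names and IPs are made up), each node prefers the direct link to a peer and falls back to a route via the third node if that link dies:
    ```
    # /etc/network/interfaces fragment on node1 (10.15.15.50), assuming
    # ens19 links directly to node2 (10.15.15.51) and ens20 to node3
    auto ens19
    iface ens19 inet static
        address 10.15.15.50/24
        # preferred: direct link to node2
        up ip route add 10.15.15.51/32 dev ens19 metric 1
        # fallback: reach node2 via node3's link; node3 must forward
        up ip route add 10.15.15.51/32 dev ens20 metric 100
    ```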
  • aaron
    Okay. Just to be sure: the `ls` output which reported that the pve2-vm directory does not exist was run on the same node, vm-gravity-2?
  • aaron
    Thanks! This is getting more confusing the more info I get. The `ls` command earlier was also done on the same node (vm-gravity-2), right? Would it be possible to give us remote SSH access so we can try to debug this directly on that host? This...
  • aaron
    Hey, thanks for the output. Unfortunately I currently can't explain why it ended up doing what it did. Could you please run the following debug build and get the journal? http://download.proxmox.com/temp/pve-cluster-9-rrd-debug-v2/ This one...
  • aaron
    What do you get back if you run `info balloon` in the "Monitor" submenu of the VM?
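    The same can be done on the CLI instead of the GUI; 100 is a placeholder VM ID:
    ```
    # open the QEMU monitor of the VM, then query the balloon state
    qm monitor 100
    qm> info balloon
    ```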
  • aaron
    If you set "is_mountpoint", it should be a dataset; otherwise the path would not be a mountpoint. Using a dedicated dataset is what I would do. Having that separation gives you some benefits, for example, should you ever want to use the...
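    A minimal sketch of that setup, assuming a pool named rpool and a hypothetical directory storage called backup:
    ```
    # dedicated dataset, mounted at /rpool/backup by default
    zfs create rpool/backup

    # matching entry in /etc/pve/storage.cfg; with is_mountpoint set,
    # PVE refuses to use the path if the dataset is not actually mounted
    dir: backup
        path /rpool/backup
        content backup
        is_mountpoint yes
    ```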
  • aaron
    Keep in mind that this stems from a time when all we had were HDDs. Given that ZFS is copy-on-write, the data will fragment over time, and if the HDD is full, it will need more time to find unused space on the disk. With SSDs, where the seek time...
  • aaron
    Doesn't even look too bad. One more thing you need to be aware of is that `zpool` will show you raw storage and `zfs` usable storage. As in, IIUC, you have 3x 480G SSDs in that raidz1 pool. The overall USED + AVAIL in the `zfs list` output for the pool...
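    The rough arithmetic for that example, assuming 3x 480G in raidz1:
    ```
    zpool list   # raw:    3 * 480G = ~1440G, shown for the whole pool
    zfs list     # usable: raidz1 keeps one disk's worth of parity,
                 #         so roughly 2/3 of raw = ~960G, minus overhead
    ```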
  • aaron
    VMs are stored in datasets of type volume (zvol), which provide block devices. In any raidz pool they need to store parity blocks as well. That is most likely what eats away the additional space. How much it is depends on the raidzX level, the...
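    Comparing the logical and the allocated size of a zvol shows how much the parity/padding overhead actually is; the dataset name below is a placeholder:
    ```
    # logicalused = data as the guest sees it,
    # used        = space allocated on the pool incl. raidz parity/padding
    zfs get volblocksize,logicalused,used rpool/data/vm-100-disk-0
    ```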
  • aaron
    Hmm, the web interface itself runs in your local browser. But if you mean that it loads slowly whenever it fetches data from the server, then there could be a few things. The kernel panic doesn't look too good. This could be a hardware problem...
  • aaron
    AFAIU it is considered a tech preview. It is marked as such in the GUI when you create a new storage. Why do we mark it as tech-preview? Because it is a new and major feature that has the potential for edge-cases that are not yet handled well. By...
  • aaron
    Thanks for bringing this to our attention. I just sent out a patch to fix this. https://lore.proxmox.com/pve-devel/20250828125810.3642601-1-a.lauterer@proxmox.com/
  • aaron
    aaron replied to the thread [SOLVED] 3-node cluster.
    Yep. Even though the situations sound a bit contrived. But in reality, who knows what sequence of steps might lead to something similar :) If you want to have different device classes and want specific pools to make use of only one, you need...
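    The usual approach is a device-class-specific CRUSH rule that the pool is then assigned to; rule and pool names below are placeholders:
    ```
    # replicated rule limited to one device class:
    # root "default", failure domain "host", device class "ssd"
    ceph osd crush rule create-replicated replicated-ssd default host ssd

    # let the pool use only OSDs matched by that rule
    ceph osd pool set mypool crush_rule replicated-ssd
    ```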
  • aaron
    aaron replied to the thread [SOLVED] 3-node cluster.
    That is true... Especially if you don't set the "size" larger than 3. The additional step to distribute it per host is one more failsafe, just in case you have more nodes per room and a pool with a larger size.
  • aaron
    aaron replied to the thread [SOLVED] 3-node cluster.
    One more thing: if you want to prevent people (including yourself ;) ) from changing the size property, you can run `ceph osd pool set {pool} nosizechange true`
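    For completeness, a small usage sketch with a placeholder pool name; the flag shows up in the pool's flag list and can be reverted the same way:
    ```
    # lock the pool's size/min_size against accidental changes
    ceph osd pool set mypool nosizechange true

    # verify: the flags column of the pool line should list nosizechange
    ceph osd dump | grep '^pool' | grep mypool

    # undo if needed
    ceph osd pool set mypool nosizechange false
    ```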