Search results

  1. Out of memory

    Thanks, I'm not a ZFS expert, but as far as I can see ARC usage is low:
    # arc_summary -g
    ARC: 4.3 GiB (0.9 %)  MFU: 3.9 GiB  MRU: 196.7 MiB  META: 440.8 MiB (377.9 GiB)  DNODE 107.5 MiB (37.8 GiB)
    +----------------------------------------------------------+ |...
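
    If ARC growth ever needs to be ruled out as the memory consumer on a node like this, a minimal sketch of capping it persistently follows; the 4 GiB limit is only an example value, not something taken from the thread:

      # cap the ZFS ARC at 4 GiB (value in bytes); applied at module load
      echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
      update-initramfs -u
      # apply immediately, without a reboot
      echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
      # verify the configured limit and the current ARC size
      cat /sys/module/zfs/parameters/zfs_arc_max
      grep '^size' /proc/spl/kstat/zfs/arcstats
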
  2. Out of memory

    Hi, we are experiencing problems with RAM, especially on a node that does not appear to have very high usage: Despite this we note that some machines go "out of memory": Any ideas? Regards, Matteo
  3. Live migration problems between higher to lower frequencies CPUs

    Hi, we didn't test it because we prefer not to stop the new node, with all the VMs on it, to update it. Now the plan is to see if we can use other nodes to complete the cluster upgrade without any restart of VMs, updating them to kernel 5.19. It will take a couple of weeks, but I'll keep this post...
  4. Live migration problems between higher to lower frequencies CPUs

    Good evening, we experience problems when migrating VMs from a host with CPU: 64 x Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz (2 Sockets) to a host with CPU: 40 x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (2 Sockets) No problem in the opposite direction. On both hosts we have the same...
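
    One common approach to migrations between mixed CPU generations, not confirmed as the fix in this thread, is to give the VMs a CPU type that both hosts can provide, so the guest never sees flags that exist only on the newer Xeon. A minimal sketch with a hypothetical VMID 100:

      # check which CPU type the VM currently presents to the guest
      qm config 100 | grep ^cpu
      # switch to a baseline model available on both hosts
      # (takes effect after the guest is powered off and started again)
      qm set 100 --cpu kvm64
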
  5. Ballooning

    Yes, this is precisely the problem; it seems to me that it already happens to us at 60%. The image indicates that there are 43.248.196 KB out of 128 GB of RAM occupied by a dummy process created by ballooning. Thanks, we plan to upgrade to Proxmox 7 shortly.
    # info balloon
    balloon...
  6. Ballooning

    Good morning, on a node of our cluster with 1TB of RAM occupied at 60% we have a problem with ballooning. Specifically, on a Windows Server 2019 VM with 128GB of RAM: On the node with Proxmox VE 6.4-13 with a Community subscription we see this:
    $ free
    total used...
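
    For reference, and not as the resolution of this thread: the balloon state can be inspected from the host through the QEMU monitor, and ballooning can be disabled per VM if the dummy-process behaviour on Windows guests is unwanted. A sketch assuming a hypothetical VMID 100:

      # query the balloon driver for the guest's current target/actual memory
      qm monitor 100
      # at the "qm>" prompt type: info balloon

      # disable ballooning entirely for this VM (a value of 0 turns the balloon device off)
      qm set 100 --balloon 0
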
  7. Question about ZFS replace

    Very clear, thanks! But now what do you suggest? Is it better to leave it as it is, given that it is drive number 4 in the array, or to remove it from the array, re-partition it and insert it again? All drives are operational now; can I simply do a: zpool offline -f rpool...
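
    A possible sequence for the re-partition-and-reinsert option discussed here, sketched under the assumption of the default PVE/PBS ZFS-on-root layout where the pool lives on partition 3 and the ESP on partition 2; the device names are placeholders, not taken from the thread:

      # copy the partition table from a healthy member to the replacement disk,
      # then give the copy new random GUIDs
      sgdisk /dev/sdb -R /dev/sdd
      sgdisk -G /dev/sdd

      # take the old device out and resilver onto partition 3 of the new disk
      zpool offline rpool <old-device>
      zpool replace rpool <old-device> /dev/sdd3

      # re-create the bootloader on the new disk
      proxmox-boot-tool format /dev/sdd2
      proxmox-boot-tool init /dev/sdd2
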
  8. What-if I delete index folders?

    Yes, thanks, CPU is low (< 7%) and the throughput, if for example I move a file via rsync from PBS to the SAN or vice versa, is 10 times higher than during normal activity on PBS.
  9. Garbage collection failed

    Hi, sorry, again on garbage collection... The GC proceeded, but at some point this error came out:
    Garbage collection failed: unlinking chunk "13bc8899da253675362f2821acfb4604575b7ed1377320b6ec86aa05c449f09d" failed on store 'san05' - ENOENT: No such file or directory
    The task failed and the...
  10. What-if I delete index folders?

    Hmm... I checked and we do not exceed 10% of the line capacity during normal operations (backup, verify, garbage collection).
  11. Question about ZFS replace

    Hi, I set up a physical test machine with 4 HDDs (2TB) and installed PBS on ZFS at setup (RAIDZ1), but then one HDD died after a day (recovered hardware). I replaced the disk and did a "zpool replace": the result is that "zpool status" is now fine, but the new disk has different partitions (two instead...
  12. What-if I delete index folders?

    Ours runs on a test system right now, that is a VM with 8 cores (48 x Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz (2 Sockets)), 128GB of RAM, 10Gbps network. The storage is a 100TB space on a SAN via NFS on a 10Gbps network. The SAN is a QNAP with a Xeon E-2236 CPU (6 cores / 12 threads) and 16 6Gbps...
  13. What-if I delete index folders?

    Hello, in the end the solution was to remove all but one backup for each VM or host from the web interface to lighten the GC and let the GC go on. After 9 days it completed phase 1 and started to free up space with phase 2. For the future: consider the significant inertia of the GC and set an...
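
    The same thinning can also be done from the command line instead of the web interface. A sketch, assuming the datastore name 'san05' mentioned in the GC error above and a hypothetical backup group vm/100, run directly on the PBS host:

      # preview which snapshots would be removed, keeping only the newest one per group
      proxmox-backup-client prune vm/100 --keep-last 1 --dry-run \
          --repository root@pam@localhost:san05
      # actually forget the older snapshots (the chunks are freed later, by the next GC)
      proxmox-backup-client prune vm/100 --keep-last 1 \
          --repository root@pam@localhost:san05
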
  14. What-if I delete index folders?

    Thank you very much! The reason I asked is that I suspect there are one or more corrupted indexes after a physical problem on the SAN we use. The effect is that the garbage collection gets stuck at 9% for several days and never finishes (and disk space usage is 96%), so I would like to delete all...
  15. What-if I delete index folders?

    Thanks for your reply, but if I had, for example, just: 2022-04-23T04:55:08Z/ 2022-04-26T03:48:27Z/ and I delete: 2022-04-23T04:55:08Z/ will I still have a consistent (full) backup for the 26th of April? I mean: does each folder contain all the information necessary to restore a backup (or full for the 23rd of April...
  16. What-if I delete index folders?

    Hello, I see in my datastore I have a structure like this: vm -> VMid -> folders with timestamp names: 2022-04-02T09:33:01Z/ 2022-04-09T01:18:16Z/ 2022-04-12T04:02:06Z/ 2022-04-13T01:57:38Z/ 2022-04-14T02:48:12Z/ 2022-04-15T02:47:03Z/ 2022-04-16T02:41:47Z/ 2022-04-19T04:47:29Z/...
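
    What that layout means in practice, as a descriptive sketch (the paths and VMID are hypothetical, not output from this system): each timestamped directory holds only small index and metadata files, while the actual data lives as deduplicated chunks shared by all snapshots:

      # one snapshot directory: indexes and metadata only
      ls /path/to/datastore/vm/100/2022-04-26T03:48:27Z/
      #   index.json.blob  qemu-server.conf.blob  drive-scsi0.img.fidx  ...
      # the data itself is in the shared chunk store at the datastore root
      ls /path/to/datastore/.chunks/ | head

    Because every snapshot directory carries its own complete set of indexes, removing one of them does not make the remaining snapshots inconsistent; the chunks it referenced simply become unreferenced and are reclaimed by the next garbage collection.
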
  17. Garbage collection

    Oh... so the reply to my whole message is "no, there is no way to force the emptying of some storage quickly"? :/ Another question: is it possible to keep the index files locally (SSD) and the chunks on NFS? M.
  18. Garbage collection

    Hello, is there a way to force the emptying of some storage quickly? We have a test installation of PBS on a VM with 8 CPUs, 128GB RAM and a 100TB datastore on a SAN shared via NFS (probably not the most efficient solution). The last two GC runs failed, so now we are reaching share saturation...
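
    There is no shortcut that bypasses garbage collection, but a GC run can at least be started and watched manually on the PBS host. A minimal sketch using the datastore name 'san05' that appears in the posts above:

      # kick off a garbage-collection run for the datastore right away
      proxmox-backup-manager garbage-collection start san05
      # check progress / the result of the last run
      proxmox-backup-manager garbage-collection status san05
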
  19. /etc/pve read-only after "pvecm create ProxCluster01"

    Solved following a link I can't post: linux-tips,com!t!couldnt-start-virtual-machines-after-proxmox-4-1-upgrade!317!2 I'll try again with "pvecm create ProxCluster01", now that I have the "antidote", but what if it doesn't work again? Matteo
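
    For the "what if it happens again" case: /etc/pve is the pmxcfs mount, and it typically goes read-only when the cluster filesystem has no quorum. A hedged sketch of the usual first checks, not the exact steps from the linked page:

      # see whether the cluster filesystem and corosync are actually running
      systemctl status pve-cluster corosync
      pvecm status

      # restart the cluster filesystem after fixing the configuration
      systemctl restart pve-cluster

      # last resort on a single node: start pmxcfs in local mode to regain write access
      systemctl stop pve-cluster
      pmxcfs -l
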