Search results

  1. ZSTD backups always corrupted

    Thanks for your answer. Here are three forum examples: https://forum.proxmox.com/threads/cant-restore-vm.97633/#post-437269 https://forum.proxmox.com/threads/problems-at-restoring-a-vm.93367/ https://forum.proxmox.com/threads/unpacking-vm-backup-in-ztsd-format-error.75706/ Of course...
  2. ZSTD backups always corrupted

    In this forum there are many cases of backup corruption related to ZSTD, but the discussion has never gone deep. All my backups on different plain XFS volumes are corrupted; this never happened in the past with gzip or lzo. Maybe this page could suggest that ZSTD is unsafe to use...
  3. Cant Restore VM

    This article suggests that ZSTD compression should be avoided: https://github.com/facebook/zstd/issues/2852
  4. Cant Restore VM

    I'm having the same issue, on an XFS-formatted 8TB internal HDD. Four out of four backups are unrestorable. I could understand a single corruption, but I can't believe four corrupted backups. Could that be a problem with ZSTD? Proxmox declared the backup process OK in every case. When... (see the integrity-check sketch after these results)
  5. [SOLVED] QEMU guest agent installation issue

    Thanks everybody. I tried everything here, but in my case I was always getting "missing libgcc_s_seh-1.dll". Renaming libgcc_arch_lib to libgcc_s_seh-1.dll made "c:\Program Files\qemu-ga\qemu-ga.exe" -s vss-install work, but Proxmox doesn't recognize it. .... In the end, the final...
  6. Proxmox node name change

    ZFS is also an issue.... Do you have a quick fix, please?
  7. How to organise drives (passthrough, snapraid, etc)

    Hi JohnTanner, how has your overall experience gone?
  8. Strange disk performence in Windows guest

    Out of the box, ZFS looks slower than anything else in Proxmox; in my case it was unusable. Just moving the virtual disk to a single XFS HDD gave workable results. The RAIDZ zpool of 4 SSDs was so slow that a VM put on it was unable to work. So ZFS manages to make slower what is physically faster. Otherwise, I...
  9. [SOLVED] cannot start ha resource when ceph in health_warn state

    Sorry, I moved all my VMs away from Ceph. Now that it is empty, it is stuck in an unhealthy state. I figured out that the unbalanced state is caused by the SSD I added to unlock its frozen state. I added that SSD through a USB dock, and Ceph recognized it as an HDD. The performance was awesome, but...
  10. Remove Ceph

    The Ceph pools are empty and there are no more VMs, though maybe some logs remain. The last OSD doesn't want to stop, without giving any reason. Proxmox 6.1-7. (See the OSD-removal sketch after these results.)
  11. [SOLVED] cannot start ha resource when ceph in health_warn state

    root@pve01sc:~# ceph osd df tree
    ID  CLASS  WEIGHT    REWEIGHT  SIZE    RAW USE  DATA     OMAP    META    AVAIL   %USE   VAR   PGS  STATUS  TYPE NAME
    -1         16.30057  -         16 TiB  2.1 TiB  2.1 TiB  98 KiB  10 GiB  14 TiB  13.08  1.00  -            root default
    -3          8.15028  -         8.2...
  12. [SOLVED] cannot start ha resource when ceph in health_warn state

    RAW STORAGE:
        CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
        hdd    15 TiB   13 TiB   2.3 TiB  2.3 TiB   15.47
        ssd    1.3 TiB  831 GiB  507 GiB  510 GiB   38.03
        TOTAL  16 TiB   13 TiB   2.8 TiB  2.8 TiB...
  13. [SOLVED] cannot start ha resource when ceph in health_warn state

    I added another 480GB SSD, so the pool is now about 480GB * 4. The space actually in use is less than 70GB. The Ceph storage is now stuck like this:
  14. [SOLVED] cannot start ha resource when ceph in health_warn state

    Well, OK, is there a way to make Ceph recover, just this once?
  15. [SOLVED] cannot start ha resource when ceph in health_warn state

    Same problem for me, and it doesn't seem to start recovery at all.

    root@pve01sc:~# ceph -s
      cluster:
        id:     56c01ca1-22ee-4bb0-9093-c852ae7d120c
        health: HEALTH_ERR
                1 full osd(s)
                1 pool(s) full
                Degraded data redundancy: 535023/1781469 objects...
  16. ZFS Faulted Drive. Looking for help.

    3x IronWolf 8TB, connected to onboard SATA; Xeon 1230v6; 32GB RAM; less than 4TB used in the pool.
  17. ZFS Faulted Drive. Looking for help.

    Thanks for your suggestion. As of my last message, I ended up putting in another 8TB HDD and starting a resilver that went on for a week! Now everything is OK and my pool is built from devices by ID. Why is resilvering such a slow process on a modern Xeon machine (not so busy...)?
  18. ZFS Faulted Drive. Looking for help.

    Hi there, a quick update. Stopping the VMs was enough to let me export the pool, but somehow it got imported again on its own:

    zpool export zfs3x8TB
    root@pve01sc:~# zpool status
      pool: zfs3x8TB
     state: DEGRADED
      scan: resilvered 6.97G in 1 days 15:50:08 with 0 errors on Fri Feb...
  19. ZFS Faulted Drive. Looking for help.

    Thanks for your advice and remarks. I started using ZFS many years ago with NAS4Free; I know it is a full-featured, very interesting project. Years of reading ZFS-related forums have also taught me that when the zpool is the boot device, it can become a full-featured nightmare...
  20. ZFS Faulted Drive. Looking for help.

    Hi Fabian, thanks for your suggestion. How do I take the pool offline correctly? Stopping the VMs is fine, but then? Are you suggesting that the Proxmox GUI be changed to build pools directly from disk IDs? (See the by-id re-import sketch after these results.)
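A quick way to check the corruption reported in results 1-4 is to test each archive with zstd itself. This is a minimal sketch, assuming the default vzdump dump path /var/lib/vz/dump and .vma.zst archive names; adjust both to your backup storage:

    # Test every zstd-compressed vzdump archive; zstd -t decompresses in
    # memory and verifies the embedded checksums without writing any output.
    for f in /var/lib/vz/dump/*.vma.zst; do
        zstd -t "$f" && echo "OK: $f" || echo "CORRUPT: $f"
    done

A clean pass rules out damage to the compressed stream itself, which helps separate a zstd bug from a storage-side or restore-side problem.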
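On the last OSD that refuses to stop (results 9-15), the usual drain-and-remove sequence with standard Ceph tooling is sketched below; the OSD ID 0 is a placeholder, and these commands are not taken from the posts themselves:

    ceph -s                                   # check cluster health before touching anything
    ceph osd out 0                            # mark the OSD out so data migrates off it
    systemctl stop ceph-osd@0                 # stop the daemon on the node hosting it
    ceph osd purge 0 --yes-i-really-mean-it   # remove it from the CRUSH and OSD maps

On a cluster whose pools are already empty there is no data left to migrate, so the out/stop/purge steps should complete almost immediately.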
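For the disk-ID question in results 17-20, a pool created with /dev/sdX names can be re-imported with stable by-id names. A minimal sketch, using the pool name zfs3x8TB from the posts and assuming every VM on the pool is stopped first:

    zpool export zfs3x8TB                      # take the pool offline cleanly
    zpool import -d /dev/disk/by-id zfs3x8TB   # re-import, resolving devices via by-id paths
    zpool status zfs3x8TB                      # the vdevs should now be listed by ID

The -d flag only changes how the devices are looked up and labelled; the data on the pool is untouched.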
