Thanks for your answer.
Here are three forum examples:
https://forum.proxmox.com/threads/cant-restore-vm.97633/#post-437269
https://forum.proxmox.com/threads/problems-at-restoring-a-vm.93367/
https://forum.proxmox.com/threads/unpacking-vm-backup-in-ztsd-format-error.75706/
Of course...
In this forum there are many cases of backup corruption related to ZSTD, but the discussion never goes very deep.
All my backups, on different plain XFS volumes, are corrupted. This never happened in the past with gzip or lzo.
Maybe this page should suggest that ZSTD is unsafe to use...
I'm having the same issue, restoring from an 8TB internal HDD formatted with XFS. Four out of four backups are unrestorable. I could understand a single corrupted backup, but I can't believe all four are corrupted. Could that be a problem with ZSTD?
In every case the backup job was reported as OK by Proxmox.
When...
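For anyone hitting the same thing: a quick way to check whether the compressed archive itself is readable, before even attempting a restore, is to let zstd verify it. A minimal sketch, with a made-up example filename under the usual dump directory:

# verify the compressed stream only, nothing gets restored
zstd -t /mnt/backup/dump/vzdump-qemu-100-2021_01_01-00_00_00.vma.zst
# or read the whole file end to end and discard the output
zstd -dc /mnt/backup/dump/vzdump-qemu-100-2021_01_01-00_00_00.vma.zst > /dev/null

If zstd itself reports errors, the file on disk is damaged; if it passes but the restore still fails, the problem is more likely inside the vma stream than in the compression layer.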
Thanks everybody, I tried everything here but in my case I'm always getting "missing libgcc_s_seh-1.dll".
Renaming:
libgcc_arch_lib
to:
libgcc_s_seh-1.dll
made the "c:\Program Files\qemu-ga\qemu-ga.exe" -s vss-install work, but Proxmox doesn't recognize it.
....
In the end, the final...
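On the "Proxmox doesn't recognize it" part, this is roughly how I would check from the PVE host whether the agent actually answers (the VMID 100 is only a placeholder, and the agent option has to be enabled on the VM):

# enable the guest agent option for the VM (placeholder VMID)
qm set 100 --agent enabled=1
# ask the running agent to reply; an error here means PVE cannot reach it
qm agent 100 ping

As far as I know the guest needs a full stop/start after enabling the agent option, so the virtio-serial device the agent talks over actually appears.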
ZFS looks slower than anything else out of the box in Proxmox.
In my case it is unusable. Just moving the virtual disk to a single XFS HDD gives workable results.
The RAIDZ zpool of 4 SSDs was so slow that a VM placed on it was unable to work.
So ZFS manages to make what is physically faster run slower.
Otherwise, I...
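For what it's worth, when I compare setups like this I watch the pool while the VM is under load. A rough sketch, assuming the pool is called rpool (adjust the name):

# per-vdev bandwidth and operations, refreshed every 5 seconds
zpool iostat -v rpool 5
# settings that typically explain slow VM storage on ZFS
zfs get sync,compression,recordsize,volblocksize rpool

That at least shows whether one device is dragging the whole RAIDZ down or whether sync writes are the bottleneck.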
Sorry. I moved all my VMs away from Ceph. Now that it is empty, it is stuck in an unhealthy state.
I figured out that the unbalanced state is due to the SSD I added to unlock its frozen state.
I added the SSD via a USB dock and it was recognized by Ceph as an HDD.
The performance was awesome, but...
The Ceph pools are empty and there are no more VMs, though maybe some logs remain.
The last OSD refuses to stop, without giving any reason. Proxmox 6.1-7.
root@pve01sc:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 16.30057 - 16 TiB 2.1 TiB 2.1 TiB 98 KiB 10 GiB 14 TiB 13.08 1.00 - root default
-3 8.15028 - 8.2...
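A rough sketch of what I would try on the stuck OSD, assuming the pools really contain nothing that is still needed (the OSD id 0 below is a placeholder, use the id from the tree above):

# mark it out, then stop the daemon and check why it will not stop
ceph osd out 0
systemctl stop ceph-osd@0
journalctl -u ceph-osd@0 -n 50
# once it is down and out, remove it with the PVE tooling
pveceph osd destroy 0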
Same problem for me, and it doesn't look like recovery is starting at all.
root@pve01sc:~# ceph -s
cluster:
id: 56c01ca1-22ee-4bb0-9093-c852ae7d120c
health: HEALTH_ERR
1 full osd(s)
1 pool(s) full
Degraded data redundancy: 535023/1781469 objects...
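In case it helps: when a single full OSD blocks everything, temporarily raising the full ratio can let the cluster move or delete data again. A sketch with an example value, to be reverted once the cluster is healthy:

# show the currently configured ratios
ceph osd dump | grep -i ratio
# temporarily raise the full threshold (example value, default is 0.95)
ceph osd set-full-ratio 0.97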
Thanks for your suggestion. As of my last message I ended up putting in another 8TB HDD and starting a resilvering that has been going on for a week!
Now everything is OK and my pool is made of devices referenced by ID.
Why is resilvering such a slow process on a modern Xeon machine (which is not even that busy...)?
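For anyone else waiting on a resilver, this is roughly how I keep an eye on it (pool name from my setup):

# progress, estimated completion time and any errors so far
zpool status -v zfs3x8TB
# per-disk throughput during the resilver, refreshed every 10 seconds
zpool iostat -v zfs3x8TB 10

On large HDDs the bottleneck is normally the disks themselves rather than the CPU, which would explain why the Xeon stays mostly idle.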
Hi there, an update.
Stopping the VMs was enough to let me export the pool.
But somehow it got imported again on its own:
zpool export zfs3x8TB
root@pve01sc:~# zpool status
pool: zfs3x8TB
state: DEGRADED
scan: resilvered 6.97G in 1 days 15:50:08 with 0 errors on Fri Feb...
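If I understand the suggestion correctly, the idea would then be to re-import it scanning /dev/disk/by-id instead of the sdX names, something like:

zpool export zfs3x8TB
zpool import -d /dev/disk/by-id zfs3x8TB
# the devices should now show up by ID
zpool status zfs3x8TB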
Thanks for your advice and remarks.
I started using ZFS many years ago with NAS4Free, and I know it is a full-featured, very interesting project. Years of reading ZFS-related forums have also taught me that when the zpool is the boot device it can become a full-featured nightmare...
Hi Fabian, thanks for your suggestion.
How do I take the pool offline correctly?
Stopping the VMs, OK, but then?
Are you thinking of changing the Proxmox GUI to build pools directly by disk ID?
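Just to spell out what I have in mind (not sure this is what you meant): stop or migrate the VMs, keep Proxmox from re-activating the pool, then export it. A sketch, assuming the storage entry has the same name as the pool:

# disable the PVE storage so nothing re-opens the zvols (storage name is an assumption)
pvesm set zfs3x8TB --disable 1
zpool export zfs3x8TB
# re-enable it after the pool has been imported again
pvesm set zfs3x8TB --disable 0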