Search results

  1. Procedure for moving two disks ZFS RAIDZ2 from internal SATA to external USB3

    Hi, I have a 6-disk RAIDZ2 pool; all 6 disks are currently on internal SATA. To work around an issue on two of the SATA ports (1) I'd like to move two disks to an external USB3 enclosure (until I receive a new SATA PCIe controller card). ZFS when on SATA has "ata-xxx-serialnumber" in front of...
  2. Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    To migrate the zvol to an appropriately configured ZFS I was not able to use zfs send/receive, because it does not support setting volblocksize: zfs receive -o volblocksize=16k fails with "cannot receive: invalid property 'volblocksize'". Feature request here: https://github.com/openzfs/zfs/issues/8704... (a workaround sketch follows after this list)
  3. [SOLVED] Asmedia ASM1062 quirk fix available upstream in 5.4.148 (not yet PVE 6.4) and not on 5.11.22 (PVE 7.0 branch)

    Hi, while running PVE 6.4 (kernel 5.4.140-1-pve) on a Threadripper system based on an AsrockRack TRX40D8-2N2T motherboard, we recently lost two SATA SSD drives to kernel write errors. After testing various things (swapping SSDs, cables) we found out the issue happens (after a while) only on the two...
  4. Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    Yes, this is where I read about volblocksize. However, it's not clear what value is optimal in my case: 128k? 1MB? 4MB (like Ceph does, I think)? Also, on the following advice: "When doing this, the guest needs to be tuned accordingly and depending on the use case, the problem of write...
  5. Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    Hi, on a machine with 6x4 TB HDDs I installed PVE 6.4 (up to date) choosing RAIDZ2 (ashift left at the default of 12), and this should leave 4x4=16 TB or 14.1 TiB usable. # zpool status pool: rpool state: ONLINE scan: scrub repaired 0B in 1 days 00:35:33 with 0 errors on Mon Sep 13 00:59:36 2021...
  6. Upgrade of pve-qemu-kvm and running VM

    Hi, I'm about to dist-upgrade PVE, and pve-qemu-kvm will go from 5.2.0-2 (with the qmp timeout / VM freeze issue starting to hit us) to 5.2.0-6 (current stable). Once the dist-upgrade is done, is there a way to "migrate" a running VM from the host it's running on to the same host, so that it runs...
  7. Migrating a real machine with NVME disk to PVE VM

    Yes, I confirm PVE sees both disks: the one from args and the small one for OVMF. Thanks!
  8. Migrating a real machine with NVME disk to PVE VM

    Hi, today I had to migrate a physical machine running Fedora Core 32 with an NVMe root disk and UEFI boot to a virtual machine (OVMF) under PVE 6.3. Unsurprisingly, Fedora's boot failed if I just copied (offline, using an adapter) the whole /dev/nvme0n1 original block device to a ZFS volume and used... (a sketch of the copy-plus-args approach follows after this list)
  9. mtime use for large images to save a reread on backup?

    While checking the logs of my PBS I noticed the longest PVE VM backup job was of a stopped VM (it has been in a stopped state for a few weeks, with daily PBS backups) which has a relatively large raw disk image (200G) on a relatively slow directory (NFS, mounted via /etc/fstab, not by PVE). Is there a way to...
  10. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Yes, as I mentioned, the PVE host has RAIDZ2 with 6x4TB HDDs. Do you know in which log I can see the history of cleanup jobs?
  11. All VMs locking up after latest PVE update

    Note that I had a relatively similar-looking issue there, with PVE running a PBS VM: https://forum.proxmox.com/threads/proxmox-backup-client-gets-http-2-0-after-70mn-pbs-server-crashed.85312/#post-375257 It also has an old Atom C2550 CPU (like other posters here) and I had qmp failed messages too...
  12. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Got an OOM again, trying now with no swap on the PBS VM (swap was set up by the installer).
  13. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    With 5.4.101 on both host and guest, no issues so far. The VM's file cache has reached 11G and has been sitting there for a while; I'm relaunching the previously failed backups.
  14. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    * last cat /proc/meminfo; ps fauxwwww on the backup1 (PBS) VM ========= Fri 05 Mar 2021 08:44:34 AM CET ======== MemTotal: 12264260 kB MemFree: 162436 kB MemAvailable: 11107408 kB Buffers: 230920 kB Cached: 10694904 kB SwapCached: 20 kB Active...
  15. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    I've got some more information: my setup is a physical server, "pcstorage1" (Atom C2550, 16G ECC RAM, 6x 4TB HDDs), with PVE 6.3, RAIDZ2, and UEFI boot, running only one VM, "backup1", with PBS 1.0.8, 12G RAM, and a 10TB ext4 UEFI system (virtio-scsi). I got the crash issue again on a large VM disk...
  16. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Hi, while using PBS 1.0.8 (server and client) to back up a directory with old VM images: proxmox-backup-client backup dir1.pxar:/mnt/old --verbose ... append chunks list len (64) append chunks list len (64) "dir1/vm1" "dir1/vm1/sdb.img" append chunks list len (64) ... append chunks list len...
  17. Is it safe to upgrade a root ZFS pool with OpenZFS 2.0.3-pve1?

    UEFI install, so no GRUB; only systemd-boot AFAIK: root@x:~# efibootmgr -v BootCurrent: 0004 ... Boot0003* Linux Boot Manager HD(2,GPT,755464f2-9b00-4f2d-9a81-98a455a69cc7,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi) Boot0004* Linux Boot Manager...
  18. Is it safe to upgrade a root ZFS pool with OpenZFS 2.0.3-pve1?

    I did a fresh install of PVE 6.3 (UEFI, ZFS RAID1), then update/dist-upgrade, reboot, then I did zfs upgrade rpool: root@x:~# zpool version zfs-2.0.3-pve1 zfs-kmod-2.0.3-pve1 root@x:~# zpool upgrade This system supports ZFS pool feature flags. All pools are formatted using feature flags...
  19. [SOLVED] ZFS storage "Detail" produces "Result verification failed (400)" error

    Yes, it helps a lot, thanks! (We're all on UEFI boot without GRUB; I'll test and report in the other thread.)
  20. [SOLVED] ZFS storage "Detail" produces "Result verification failed (400)" error

    Got the same web UI issue, and a scrub fixed it, thanks! While looking at zpool status I noticed a suggestion about running zpool upgrade: pool: rpool state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable... (see the upgrade sketch after this list)
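
Result 2 notes that zfs receive cannot set volblocksize on the target. A minimal sketch of one common workaround, assuming a hypothetical pool "tank" and zvol "vm-100-disk-1" (not names taken from the thread): create the destination zvol with the desired volblocksize up front and copy the block data into it instead of using send/receive.

    # Hypothetical names and size; the new zvol must be at least as large as the source.
    zfs create -V 100G -o volblocksize=16k tank/vm-100-disk-1-16k
    dd if=/dev/zvol/tank/vm-100-disk-1 of=/dev/zvol/tank/vm-100-disk-1-16k bs=1M status=progress conv=fsync

Because this bypasses send/receive it does not carry snapshots over; it is only a sketch of one possible approach, not the procedure used in the thread.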
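Results 7 and 8 refer to a disk attached "from args" plus a small OVMF disk. A rough sketch of what that could look like, assuming VM ID 100, pool rpool, and an offline copy of the original device (all names and sizes are hypothetical, not taken from the thread):

    # Hypothetical size/name: create a zvol at least as large as the source disk first,
    # copy the physical NVMe disk into it (offline, via an adapter), then expose it to
    # the guest as an emulated NVMe controller through extra QEMU arguments.
    zfs create -V 500G rpool/data/vm-100-disk-1
    dd if=/dev/nvme0n1 of=/dev/zvol/rpool/data/vm-100-disk-1 bs=4M status=progress conv=fsync
    qm set 100 --bios ovmf --efidisk0 local-zfs:1
    qm set 100 --args "-drive file=/dev/zvol/rpool/data/vm-100-disk-1,if=none,id=nvme1 -device nvme,drive=nvme1,serial=nvme1"

The --args line is only a guess at the approach the thread calls "args"; the exact drive and serial parameters would depend on the setup.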
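Results 17 through 20 circle around running zpool upgrade on a UEFI/systemd-boot root pool. A short sketch of the commands the snippets mention, using the pool name rpool shown in the results:

    zpool status rpool     # shows the "Some supported features are not enabled" notice
    zpool scrub rpool      # the scrub that cleared the web UI "Detail" error in result 20
    zpool version          # check that zfs and zfs-kmod are both on the new release
    zpool upgrade rpool    # enable the new feature flags on the pool

On a root pool, enabling new feature flags is only safe once the boot loader in use can read them (systemd-boot in results 17 and 19); this is a sketch of the commands referenced in the results, not a blanket recommendation to upgrade.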