Search results

  1. Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    Hi, On a machine with 6x4 TB hdd I installed PVE 6.4 (up to date) choosing RAIDZ2 (ashift left at the default of 12), and this should leave 4x4 = 16 TB or 14.1 TiB usable.
      # zpool status
        pool: rpool
       state: ONLINE
        scan: scrub repaired 0B in 1 days 00:35:33 with 0 errors on Mon Sep 13 00:59:36 2021...
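
    For context on this kind of "size multiplication" question, the usual first checks compare the pool's ashift with the volblocksize of the VM zvols, since a small volblocksize on an ashift=12 RAIDZ2 vdev wastes space to padding. A minimal sketch, with the zvol name and storage id assumed:

        # Pool sector-size exponent (12 = 4 KiB sectors)
        zpool get ashift rpool
        # Block size of an existing VM disk zvol (name assumed)
        zfs get volblocksize rpool/data/vm-100-disk-0
        # Raise the default block size for newly created PVE disks
        # (storage id 'local-zfs' assumed; existing zvols are unaffected)
        pvesm set local-zfs --blocksize 16k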
  2. Upgrade of pve-qemu-kvm and running VM

    Hi, I'm about to dist-upgrade PVE, and pve-qemu-kvm will go from 5.2.0-2 (with the qmp timeout / VM freeze issue starting to hit us) to 5.2.0-6 (current stable). Once the dist-upgrade is done, is there a way to "migrate" a running VM from the host it's running on to the same host, so that it runs...
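
    A common way to get a running VM onto the new QEMU binary without downtime (not confirmed in this snippet) is a live-migration round trip through another cluster node; a sketch assuming VM 100 and nodes node1/node2:

        # From node1: live-migrate away, then bring it back,
        # so the VM ends up running under the upgraded pve-qemu-kvm
        qm migrate 100 node2 --online
        # ...and once it is on node2, from node2:
        qm migrate 100 node1 --online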
  3. Migrating a real machine with NVME disk to PVE VM

    Yes, I confirm PVE sees both disks: the one from args and the small one for OVMF. Thanks!
  4. Migrating a real machine with NVME disk to PVE VM

    Hi, Today I had to migrate a physical machine running Fedora Core 32, with an NVME root disk and UEFI boot, to a virtual machine (OVMF) under PVE 6.3. Unsurprisingly, Fedora failed to boot if I just copied (offline, using an adapter) the whole /dev/nvme0n1 original block device to a ZFS volume and used...
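
    The offline copy step described here is typically a raw block copy onto the VM's zvol; a minimal sketch, with the target zvol name assumed:

        # Source NVMe disk attached via an adapter; copy it raw onto the zvol
        dd if=/dev/nvme0n1 of=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M status=progress conv=fsync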
  5. mtime use for large images to save a reread on backup?

    While checking the logs of my PBS I noticed that the longest PVE VM backup job was of a stopped VM (it has been in a stopped state for a few weeks, with daily PBS backups) which has a relatively large raw disk image (200G) on a relatively slow directory storage (NFS, mounted by /etc/fstab, not by PVE). Is there a way to...
  6. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Yes, as I mentioned, the PVE host has RAIDZ2 with 6x4TB hdd. Do you know in which log I can see the history of cleanup jobs?
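
    On the PBS side, task history (including garbage-collection and prune runs) is normally available from the task list; a sketch, with the datastore name assumed:

        # Recent tasks on the PBS server
        proxmox-backup-manager task list
        # Garbage-collection status for a datastore (name 'store1' assumed)
        proxmox-backup-manager garbage-collection status store1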
  7. All VMs locking up after latest PVE update

    Note that I had a relatively similar-looking issue there with PVE running a PBS VM: https://forum.proxmox.com/threads/proxmox-backup-client-gets-http-2-0-after-70mn-pbs-server-crashed.85312/#post-375257 It also has an old Atom C2550 CPU (like other posters here), and I had qmp failed messages too...
  8. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Got an OOM again; trying now with no swap on the PBS VM (swap was set up by the installer).
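
    Disabling swap inside the VM, as described, is usually a two-step change; a minimal sketch (the exact swap device set up by the installer is assumed):

        # Turn off all active swap immediately
        swapoff -a
        # Then comment out the swap line in /etc/fstab so it stays off across reboots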
  9. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    With 5.4.101 on both host and guest, no issue so far. The VM file cache size has reached 11G and has been sitting there for a while; I'm relaunching the previously failed backups.
  10. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    * last cat /proc/meminfo; ps fauxwwww on the backup1 (PBS) VM
      ========= Fri 05 Mar 2021 08:44:34 AM CET ========
      MemTotal:       12264260 kB
      MemFree:          162436 kB
      MemAvailable:   11107408 kB
      Buffers:          230920 kB
      Cached:         10694904 kB
      SwapCached:           20 kB
      Active...
  11. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    I've got some more information: my setup is a physical server "pcstorage1" (Atom C2550, 16G ECC RAM, 6x4TB hdd) running PVE 6.3 (RAIDZ2, UEFI boot) with only one VM, "backup1", running PBS 1.0.8; the VM has 12G RAM and a 10TB ext4 UEFI system disk (virtio-scsi). I got the crash issue again on a large VM disk...
  12. proxmox-backup-client gets HTTP/2.0 after 70mn, PBS server crashed?

    Hi, While using PBS 1.0.8 (server and client) to back up a directory with old VM images:
      proxmox-backup-client backup dir1.pxar:/mnt/old --verbose
      ...
      append chunks list len (64)
      append chunks list len (64)
      "dir1/vm1"
      "dir1/vm1/sdb.img"
      append chunks list len (64)
      ...
      append chunks list len...
  13. Is it safe to upgrade a root ZFS pool with OpenZFS 2.0.3-pve1?

    UEFI install, so no GRUB, only systemd-boot AFAIK:
      root@x:~# efibootmgr -v
      BootCurrent: 0004
      ...
      Boot0003* Linux Boot Manager HD(2,GPT,755464f2-9b00-4f2d-9a81-98a455a69cc7,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
      Boot0004* Linux Boot Manager...
  14. Is it safe to upgrade a root ZFS pool with OpenZFS 2.0.3-pve1?

    I did a fresh install of PVE 6.3 (UEFI, ZFS RAID1), then update/dist-upgrade and reboot, then I did zfs upgrade rpool:
      root@x:~# zpool version
      zfs-2.0.3-pve1
      zfs-kmod-2.0.3-pve1
      root@x:~# zpool upgrade
      This system supports ZFS pool feature flags.
      All pools are formatted using feature flags...
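
    On a root pool, the usual concern with zpool upgrade is the boot path; with a UEFI/systemd-boot install the kernel and initrd live on the vfat ESP, so after enabling new feature flags the typical step is refreshing the ESP copies. A sketch, assuming the PVE 6.x tooling:

        # Enable the new feature flags on the root pool
        zpool upgrade rpool
        # Re-copy kernels and loader onto the ESP(s)
        pve-efiboot-tool refresh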
  15. [SOLVED] ZFS storage "Detail" produces "Result verification failed (400)" error

    Yes, it helps a lot, thanks! (We're all UEFI boot without GRUB; I'll test and report in the other thread.)
  16. [SOLVED] ZFS storage "Detail" produces "Result verification failed (400)" error

    Got the same web UI issue, and a scrub fixed it, thanks! While looking at zpool status I noticed a suggestion about running zpool upgrade:
        pool: rpool
       state: ONLINE
      status: Some supported features are not enabled on the pool. The pool can
              still be used, but some features are unavailable...
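
    For reference, the scrub mentioned here is a one-liner; the pool name is taken from the status output above:

        # Start a scrub and check its progress
        zpool scrub rpool
        zpool status rpool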
  17. When a cluster node is lost is it possible to restart its VM on another node?

    Hi, thanks for your answer. I read this document, and I'm not sure I'll be able to test a realistic set of conditions with HA as proposed, as my time is limited; that's why I asked how to "manually" restart a VM from a node known for sure to be failed/offline on another one. On our current non-proxmox...
  18. When a cluster node is lost is it possible to restart its VM on another node?

    Hi, I'm testing some edge cases with Proxmox VE 6.3: I have a cluster p1 with three nodes, node1, node2 and node3, all using only a shared NFS (from another machine outside the cluster) for VM disk storage. VM 100 is running on node1; node2 and node3 have no VM running. Let's assume node1 fails...
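
    Without HA configured, the usual manual recovery (only safe once node1 is confirmed down and kept down, to avoid starting the VM twice on the shared storage) is moving the VM's config inside the cluster filesystem; a sketch using the names from the post:

        # On a surviving node that still has quorum:
        mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/
        qm start 100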
  19. Removing vlan id 1 from a trunk

    I lightly tested the following patch and it seemed to work for my trunk VM port without breaking my other VM:
      root@nuc3:/usr/share/perl5/PVE# diff -u Network.pm.orig Network.pm; diff -u QemuServer.pm.orig QemuServer.pm
      --- Network.pm.orig   2021-02-02 20:19:17.454498452 +0100
      +++ Network.pm...
  20. Removing vlan id 1 from a trunk

    Yes, but if we define proper trunks for VM ports it doesn't get through. Since the Proxmox UI doesn't allow specifying advanced ifupdown2 VLAN options, and you have to manually edit /etc/network/interfaces to, for example, select some VLANs instead of all of them, it's not Proxmox's responsibility to use the...
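
    For reference, restricting a VM port to an explicit trunk is a per-NIC setting; a minimal sketch, with the VM id and VLAN ids assumed:

        # Allow only tagged VLANs 10 and 20 on VM 100's net0 (VLAN-aware bridge vmbr0)
        qm set 100 --net0 "virtio,bridge=vmbr0,trunks=10;20"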