Search results

  1. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    I cannot replicate it now. It was most likely a combination of a VM started with different versions of the kernel and various packages which for some reason was not able to start again ;-(
  2. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    Last working kernel: pve-kernel-5.3.13-3-pve:amd64. I will try installing this kernel and see if it fixes the problem.
  3. mir

    ZFS over iSCSI Network HA

    Since LACP is a pure network thing and completely unrelated to iSCSI it is working 100%.
  4. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    No, same error:

        lvchange -ay qnap/vm-153-disk-0
        device-mapper: create ioctl on qnap-vm--153--disk--0 LVM-RCevIXI8i5huDYro1QZ0fdlcZWWqYxm7DIe1JfkZcJK5iskK4TCa7rX8g5Kvwi3c failed: Device or resource busy
  5. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    This specific setup has been working unchanged since Proxmox 1.9. Did you test with an LVM marked 'Shared'?

        esx1:~# lvs |grep qnap
        vm-153-disk-0 qnap -wi------- 8.00g
        esx2:~# lvs |grep qnap
        vm-153-disk-0 qnap -wi-ao---- 8.00g
        esx1:~# lvdisplay qnap
        --- Logical volume ---
        LV Path...
  6. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    Hi all, The previous upgrade seems to have broken online and offline migration with shared LVM storage on iSCSI. This is the upgrade:

        Start-Date: 2020-02-11 01:24:26
        Commandline: apt upgrade
        Requested-By: mir (1000)
        Install: pve-kernel-5.3.18-1-pve:amd64 (5.3.18-1, automatic)
        Upgrade...
  7. mir

    proxmox 5.4 to 6.1

    I was following these instructions and it was also these instructions I was referring to.
  8. mir

    proxmox 5.4 to 6.1

    Hi all, Today I took the plunge and upgraded my cluster from 5.4 to 6.1 using apt update, apt dist-upgrade. The upgrade itself was painless; however, one small annoyance arose when upgrading the corosync qdevice package, as explained here...
  9. mir

    ZFS over iSCSI on Synology

    The disk format must be qcow2 to be able to take snapshots, and you have probably chosen raw.
  10. mir

    Zfs over iscsi plugin

    Only if you install and configure multipathd.
  11. mir

    debian etch kvm no longer works

    If the disk is in raw format you could use a loop mount; for qcow2 you must use qemu-nbd. See https://www.linuxunbound.com/2016/07/mounting-raw-and-qcow2-images/
  12. mir

    X11SCL-LN4F OK for Proxmox

    I did not recommend that CPU; it was just used to emphasize my point ;-). For 12 VMs I would go for a CPU with at least 6 cores.
  13. mir

    X11SCL-LN4F OK for Proxmox

    Depending on the use case, the usual recommendation is that the number of cores is more important than the maximum clock speed of the cores. This is the CPU used at DigitalOcean: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 12 cores and 24 threads. Another factor to take into consideration is TDP.
  14. mir

    Omnios ZFS/COMSTAR issue with Proxmox

    If OP uses one of the supported LTS versions, TRIM is not disabled, so if that is the case then it must be another problem.
  15. mir

    PVE 6.1 hard freezing with BTRFS scrub

    Or stop using btrfs, which is unsupported in Proxmox. If you insist on using btrfs, avoid its RAID5 and RAID6 like the plague, since they have been unstable for years. Read more here: https://forum.proxmox.com/threads/proxmox-with-zfs-or-btrfs.50962/
  16. mir

    Memory unit size in GUI

    Are you referring to this line?

        Memory: 3960588K/4193764K available (14339K kernel code, 2370K rwdata, 4684K rodata, 2660K init, 5076K bss, 233176K reserved, 0K cma-reserved)

    Looking at that line I cannot make that be 4 GiB. Using free with default options (free) shows total memory as 4030596 KiB...
  17. mir

    Memory unit size in GUI

    Hi all, According to the GUI, memory sizes should be in GiB (gibibytes) but seem to actually be in GB (gigabytes)? It is correct for disks (GiB as in gibibytes). pve-manager: 5.4-13. Here is an example:

        /etc/pve/qemu-server/128.conf
        agent: 1
        balloon: 2048
        bootdisk: scsi0
        cores: 2
        cpu...
  18. mir

    openvswitch vs linux bridge performance

    No, nothing in particular just a general observation.
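The "Device or resource busy" error in result 4 usually means the LV is still active on another node. The mangled name in the message (`qnap-vm--153--disk--0`) can be derived from the VG and LV names: device-mapper doubles every dash inside each name and joins the two parts with a single dash. A minimal sketch, using the VG/LV names from the snippet:

```shell
vg="qnap"
lv="vm-153-disk-0"

# device-mapper name mangling: dashes inside the VG and LV names are
# doubled, then the two parts are joined with a single dash
dm_name="${vg//-/--}-${lv//-/--}"
echo "$dm_name"
```

With that name, `dmsetup info qnap-vm--153--disk--0` on each node shows where the device is still open, and `lvchange -an qnap/vm-153-disk-0` on the other node is the usual way to release it.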
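The loop-mount vs. qemu-nbd advice in result 11 can be sketched as follows; the image file names and the partition start sector (2048) are assumptions for illustration, not taken from the thread:

```shell
# Raw image: mount directly via a loop device.
# Offset = sector size * first-partition start sector (2048 is the
# common default for modern partition tables; check with fdisk -l).
offset=$((512 * 2048))
sudo mount -o loop,offset="$offset" disk.raw /mnt

# qcow2 image: the loop driver cannot parse qcow2, so export it as a
# block device with qemu-nbd first.
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 disk.qcow2
sudo mount /dev/nbd0p1 /mnt

# When done:
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0
```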
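The GiB/GB question in results 16 and 17 comes down to the divisor: GiB uses powers of 1024, GB powers of 1000. Using the 4193764K figure from the kernel line quoted in result 16 (the arithmetic below is illustrative, scaled by 100 to keep two decimals in integer math):

```shell
kib=4193764   # total memory in KiB from the kernel boot line

# GiB uses powers of 1024: KiB / 1024^2
gib_x100=$(( kib * 100 / (1024 * 1024) ))
echo "GiB x100: $gib_x100"    # prints 399, i.e. just under 4.00 GiB

# GB uses powers of 1000: (KiB * 1024 bytes) / 1000^3
gb_x100=$(( kib * 1024 / 10000000 ))
echo "GB x100: $gb_x100"      # prints 429, i.e. about 4.29 GB
```

The same byte count thus reads as ~4.0 GiB or ~4.3 GB, a roughly 7% difference, which is why the unit label in the GUI matters.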
