Search results

  1. mir

    ZFS over iSCSI Network HA

    Since LACP is purely a network-layer mechanism and completely unrelated to iSCSI, it works 100%.
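    For reference, a minimal sketch of an LACP bond under a Proxmox bridge in /etc/network/interfaces; the interface names and addresses are placeholders, and the switch ports must be configured for LACP (802.3ad) as well:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0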
  2. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    No, same error:
        lvchange -ay qnap/vm-153-disk-0
        device-mapper: create ioctl on qnap-vm--153--disk--0 LVM-RCevIXI8i5huDYro1QZ0fdlcZWWqYxm7DIe1JfkZcJK5iskK4TCa7rX8g5Kvwi3c failed: Device or resource busy
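    A sketch of how one might track down that 'Device or resource busy'; the volume names follow the post, but the diagnostic steps are an assumption, not something from the thread:

        # On the node reporting the error: is device-mapper already holding the LV?
        dmsetup info -c | grep vm--153--disk--0
        # On whichever node still has the LV active (lv_attr ends in 'ao----'):
        lvs -o lv_name,lv_attr qnap
        lvchange -an qnap/vm-153-disk-0    # deactivate it, then retry the migration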
  3. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    This specific setup has been working unchanged since Proxmox 1.9. Did you test with an LVM marked 'Shared'?
        esx1:~# lvs |grep qnap
          vm-153-disk-0 qnap -wi------- 8.00g
        esx2:~# lvs |grep qnap
          vm-153-disk-0 qnap -wi-ao---- 8.00g
        esx1:~# lvdisplay qnap
        --- Logical volume ---
        LV Path...
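    For context: the trailing 'o' in esx2's attribute string (-wi-ao----) means the LV is active and open there, which lines up with the 'resource busy' error on the other node. The 'Shared' flag itself lives on the storage definition in /etc/pve/storage.cfg; a sketch, assuming an LVM storage named qnap (the base line pointing at the iSCSI LUN is omitted):

        lvm: qnap
            vgname qnap
            shared 1
            content images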
  4. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    Hi all, The previous upgrade seems to have broken online and offline migration with shared LVM storage on iSCSI. This is the upgrade:
        Start-Date: 2020-02-11 01:24:26
        Commandline: apt upgrade
        Requested-By: mir (1000)
        Install: pve-kernel-5.3.18-1-pve:amd64 (5.3.18-1, automatic)
        Upgrade...
  5. mir

    proxmox 5.4 to 6.1

    I was following those instructions, and they are also the ones I was referring to.
  6. mir

    proxmox 5.4 to 6.1

    Hi all, Today I took the plunge and upgraded my cluster from 5.4 to 6.1 using apt update, apt dist-upgrade. The upgrade itself was painless; the one small annoyance was upgrading the corosync qdevice package, as explained here...
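    For anyone following the same path, the documented 5-to-6 procedure is roughly the sketch below; the exact repository names depend on whether the enterprise or no-subscription repo is in use:

        pve5to6                                    # readiness checklist shipped with PVE 5.4
        sed -i 's/stretch/buster/g' /etc/apt/sources.list
        # also switch the PVE repo under /etc/apt/sources.list.d/ to buster
        apt update && apt dist-upgrade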
  7. mir

    ZFS over iSCSI on Synology

    The disk format must be qcow2 to be able to take snapshots; you have probably chosen raw.
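    If the disk already exists in raw format, one way out would be converting it; a sketch with placeholder file names (on PVE, 'Move disk' with qcow2 as the target format achieves the same):

        qemu-img convert -f raw -O qcow2 vm-100-disk-0.raw vm-100-disk-0.qcow2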
  8. mir

    Zfs over iscsi plugin

    Only if you install and configure multipathd.
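    A minimal sketch of such a setup; the config values are common defaults, not something the post specifies:

        apt install multipath-tools
        # /etc/multipath.conf
        defaults {
            user_friendly_names yes
            find_multipaths yes
        }
        systemctl restart multipathd
        multipath -ll    # verify that both iSCSI paths are grouped into one device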
  9. mir

    debian etch kvm no longer works

    If the disk is in raw format you can use a loop mount; for qcow2 you must use qemu-nbd. See https://www.linuxunbound.com/2016/07/mounting-raw-and-qcow2-images/
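    A sketch of both approaches; image paths and partition numbers are placeholders:

        # raw: loop mount (-P scans for partitions, --show prints the device)
        losetup -fP --show disk.raw      # e.g. /dev/loop0
        mount /dev/loop0p1 /mnt

        # qcow2: expose the image through the nbd kernel module
        modprobe nbd max_part=8
        qemu-nbd --connect=/dev/nbd0 disk.qcow2
        mount /dev/nbd0p1 /mnt
        # cleanup: umount /mnt && qemu-nbd --disconnect /dev/nbd0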
  10. mir

    X11SCL-LN4F OK for Proxmox

    I did not recommend that CPU; it was just used to emphasize my point ;-). For 12 VMs I would go for a CPU with at least 6 cores.
  11. mir

    X11SCL-LN4F OK for Proxmox

    Depending on the use case, the usual recommendation is that the number of cores is more important than the maximum clock speed of the cores. This is the CPU used at DigitalOcean: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz. 12 cores and 24 threads. Another factor to take into consideration is TDP.
  12. mir

    Omnios ZFS/COMSTAR issue with Proxmox

    If the OP uses one of the supported LTS versions, TRIM is not disabled, so if that is the case it must be another problem.
  13. mir

    PVE 6.1 hard freezing with BTRFS scrub

    Or stop using btrfs, which is unsupported in Proxmox. If you insist on using btrfs, avoid its raid5 and raid6 like the plague, since they are unstable and have been for years. Read more here: https://forum.proxmox.com/threads/proxmox-with-zfs-or-btrfs.50962/
  14. mir

    Memory unit size in GUI

    Are you referring to this line?
        Memory: 3960588K/4193764K available (14339K kernel code, 2370K rwdata, 4684K rodata, 2660K init, 5076K bss, 233176K reserved, 0K cma-reserved)
    Looking at that line I cannot make it add up to 4 GiB. Using free with defaults shows total memory as 4030596 KiB...
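    The arithmetic in question, as a sketch: 4 GiB = 4 × 1024 × 1024 KiB = 4194304 KiB, so the kernel's 4193764K total is 4 GiB minus a little reserved memory, whereas 4 GB would only be about 3906250 KiB. free can show both bases:

        free -h          # binary units (Gi)
        free -h --si     # decimal units (G)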
  15. mir

    Memory unit size in GUI

    Hi all, According to the GUI, memory sizes should be in GiB (gibibytes) but seem to actually be in GB (gigabytes)? It is correct for disks (GiB as in gibibytes). pve-manager: 5.4-13. I have an example:
        /etc/pve/qemu-server/128.conf
        agent: 1
        balloon: 2048
        bootdisk: scsi0
        cores: 2
        cpu...
  16. mir

    openvswitch vs linux bridge performance

    No, nothing in particular, just a general observation.
  17. mir

    ZFS bad Performance!

    If you want performance as in IO, the only way to go is RAID 10 (striped mirrors). More stripes means higher performance. The explanation given here is very good: https://www.youtube.com/watch?v=GuUh3bkzaKE
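    In ZFS terms a striped mirror is simply several mirror vdevs in one pool; a sketch with placeholder disk names:

        zpool create tank mirror sda sdb mirror sdc sdd
        # adding another mirror pair later adds another stripe:
        zpool add tank mirror sde sdf

    Each extra mirror vdev adds a stripe, which is why more stripes means more IOPS.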
  18. mir

    openvswitch vs linux bridge performance

    My impression is also that linux bridge is the stable and simple choice, but with fewer features than ovs. Whether linux bridge or ovs is simpler to configure is a matter of taste ;-) Looking forward to seeing the results of your test. A comparison of CPU usage under load could be interesting too.
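    One way such a test could look; the host addresses and tooling here are assumptions, not something from the thread:

        # on a VM behind the bridge under test
        iperf3 -s
        # from a VM on the other host: 60 s, 4 parallel streams
        iperf3 -c 10.0.0.11 -t 60 -P 4
        # meanwhile, sample hypervisor CPU usage (12 samples, 5 s apart)
        mpstat 5 12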
  19. mir

    openvswitch vs linux bridge performance

    Anybody here aware of performance comparison tests made lately between openvswitch and linux bridge?