Search results

  1. Proxmox hang when stop/shutdown lxc-container after update host

    I thought that this was related to privileged containers, but later I caught similar freezes on unprivileged containers too. What was common in all cases was a preliminary host update: Start-Date: 2024-03-02 14:47:02 Commandline: apt dist-upgrade Install: pve-kernel-5.15.143-1-pve:amd64...
  2. Proxmox hang when stop/shutdown lxc-container after update host

    I have repeatedly run into a situation where updating the host (the kernel and some other components) causes the host to freeze when a container is stopped. Is it possible to solve this problem in some way other than first stopping all containers before... (a bulk-shutdown sketch follows after this list)
  3. DiskIO in CT missing

    Maybe the following Python script will help the Proxmox developers: https://github.com/jimsalterjrs/ioztat - it just parses /proc/spl/kstat/zfs/<pool>/objset-* (on zfs 0.8): # ioztat -xS -s operations operations throughput opsize dataset read write... (a kstat example follows after this list)
  4. Bump LXC 5.0

    lxc-pve/stable 5.0.0-3 amd64 [upgradable from: 4.0.12-1] Cheers!
  5. DiskIO in CT missing

    People: 1) the diskIO stat is present in a container if it is unprivileged; 2) for a privileged container the diskIO stat is not present.
  6. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    I don't know what the norm should be, I can only suggest. But performance has doubled and allows the use of CEPH in prod.
  7. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    Thanks, I know about this document, and I don't find the NUMA architecture of EPYC (Zen 2) unpleasant; on the contrary, the many customization options make it flexible. About Ceph, the current bench shows the following (also without any tuning, I just waited a few days): # rados bench -p bench 30 -t 256 -b 1024 write...
  8. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    @FXKai # numactl --hardware available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 128 129 130 131 132 133 134 135 136 137 138...
  9. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    @FXKai thx! Our network stack: Cisco Nexus 3172 (with n9k firmware) with Intel 82599ES. CPU NUMA on 2x AMD EPYC 7702 64-Core Processor: NUMA node0 CPU(s): 0-63,128-191 NUMA node1 CPU(s): 64-127,192-255
  10. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    Hi FXKai! Can you explain the following, please: 1) which switch are you using in your cluster? 2) are the Mellanox drivers from PVE, or installed separately? 3) do you use DPDK?
  11. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    I have the same problem with: # ceph --version ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable) 8 nodes (CPU 2x EPYC 64 cores / RAM 2TB / Eth 2x10Gbit/s LACP), fresh install of PVE 7.1, 2 NVMe SSDPE2KE076T8 7.68TB per node used for Ceph, each NVMe device split into 4 pieces...
  12. Network down after updates openvswitch

    Confirmed: after updating from # pveversion pve-manager/7.1-10/6ddebafe (running kernel: 5.13.19-6-pve) to # pveversion pve-manager/7.1-12/b3c09de3 (running kernel: 5.13.19-6-pve), the following message appears: ovs-vswitchd.service is a disabled or a static unit not running, not starting it. The network is disabled... (a re-enable sketch follows after this list)
  13. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    I tuned ZFS and get the following performance: FIO param size=2G # fio --time_based --name=benchmark --size=2G --runtime=30 --filename=/mnt/zfs/g-fio.test --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k...
  14. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Also, for comparison, XFS (direct) IO: # mount | grep xfs | grep test /dev/nvme6n1p1 on /mnt/test1 type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota) # fio --time_based --name=benchmark --size=15G --runtime=30 --filename=/mnt/test1/test.file --ioengine=libaio --randrepeat=0...
  15. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    Only increasing zfs_dirty_data_max (4294967296 -> 10737418240 -> 21474836480 -> 42949672960) compensates for the performance penalty, but the background writeback stays just as slow, ~10k IOPS per NVMe device: # fio --time_based --name=benchmark --size=15G --runtime=30 --filename=/mnt/zfs/g-fio.test... (a tuning sketch follows after this list)
  16. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    With ashift=12 it is the same: when the FIO param "size" is > 2G with "bs"=4k, ZFS performance drops: # fio --time_based --name=benchmark --size=1800M --runtime=30 --filename=/mnt/zfs/g-fio.test --ioengine=libaio --randrepeat=0 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0... (a full example invocation follows after this list)
  17. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    AND: man fio: I/O size size=int The total size of file I/O for each thread of this job. Fio will run until this many bytes has been transferred, unless runtime is limited by other options (such as runtime, for instance, or increased/decreased by...
  18. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID-Z10 12 x HDD

    FYI (about the poor ZFS performance with 4k): ZFS (NVMe SSD x4 in RAIDZ1 and 1 NVMe SSD for LOG): # zpool get all | egrep 'ashift|trim' zfs-p1 ashift 13 local zfs-p1 autotrim on local # zfs get...
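
Regarding the container-freeze reports in items 1 and 2: a minimal sketch of the workaround mentioned there, shutting down all containers in bulk before the host upgrade. It assumes the standard pct CLI on the PVE host and the default pct list output (CTIDs in the first column); this is an illustration, not the poster's exact procedure.

    # pct list                                                       (CTIDs are in the first column of the output)
    # for ctid in $(pct list | awk 'NR>1 {print $1}'); do pct shutdown "$ctid"; done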
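
Regarding the ioztat note in item 3: a small illustration of the kstat files that script parses. The pool name "rpool" and the objset id are placeholders; the exact set of counters depends on the ZFS version, roughly the dataset name plus read/write operation and byte counters.

    # ls /proc/spl/kstat/zfs/rpool/                                  (one objset-* file per dataset)
    # cat /proc/spl/kstat/zfs/rpool/objset-0x36                      (the per-dataset counters that ioztat aggregates)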
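
Regarding the openvswitch breakage in item 12: one commonly reported workaround for the "disabled or a static unit" message is to re-enable the service and reload the network configuration. This is a sketch under the assumption that the OVS bridges in /etc/network/interfaces are otherwise intact.

    # systemctl enable --now openvswitch-switch.service             (starts ovsdb-server and ovs-vswitchd)
    # systemctl restart networking                                   (re-applies the OVS bridge configuration)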
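
Regarding the zfs_dirty_data_max experiments in item 15: a sketch of how such a value can be set at runtime and persisted across reboots. The 42949672960 (40 GiB) figure is simply the largest value tried in that post, not a recommendation.

    # echo 42949672960 > /sys/module/zfs/parameters/zfs_dirty_data_max          (runtime change, lost on reboot)
    # echo "options zfs zfs_dirty_data_max=42949672960" >> /etc/modprobe.d/zfs.conf
    # update-initramfs -u                                                        (needed if the zfs module is loaded from the initramfs)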
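
Regarding the truncated fio runs in items 13 and 16: a representative invocation assembled only from the flags visible in those snippets (path, size, and queue depth are the quoted values, not recommendations); the reported drop appears once --size exceeds roughly 2G at --blocksize=4k.

    # fio --time_based --name=benchmark --size=15G --runtime=30 --filename=/mnt/zfs/g-fio.test --ioengine=libaio --randrepeat=0 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k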
