Search results

  1. [PVE6] ZFS issue (freeze i/o r/w)

    Yes, they have the latest firmware. # fio --size=20G --bs=4k --rw=write --direct=1 --sync=1 --runtime=60 --group_reporting --name=test --ramp_time=5s --filename=/dev/sda test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 fio-3.12 Starting 1...
  2. [PVE6] ZFS issue (freeze i/o r/w)

    smartctl output for the sda device. smartctl -a /dev/sda smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.3.18-2-pve] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Device Model: Seagate BarraCuda SSD ZA2000CM10002...
  3. [PVE6] ZFS issue (freeze i/o r/w)

    I think it is a bug in the firmware. It sends that message every 30 minutes, and the SSDs are not hot. Apr 10 12:06:33 pve-us smartd[4841]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 194 Temperature_Celsius changed from 95 to 97 Apr 10 12:06:33 pve-us smartd[4841]: Device: /dev/sdb [SAT]...
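    When these attribute-change messages pile up, it can help to reduce them to a compact device/attribute/value summary. A minimal sketch, assuming the standard smartd message format shown in the quote; the second sample line is illustrative (the original snippet truncates it):

    ```shell
    # Extract device, attribute name, and value change from smartd log lines.
    # Field positions ($7, $13, $16, $18) assume the smartd format quoted above.
    printf '%s\n' \
      'Apr 10 12:06:33 pve-us smartd[4841]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 194 Temperature_Celsius changed from 95 to 97' \
      'Apr 10 12:36:33 pve-us smartd[4841]: Device: /dev/sdb [SAT], SMART Prefailure Attribute: 194 Temperature_Celsius changed from 97 to 95' |
      awk '/SMART Prefailure Attribute/ {
        # $7 = device path, $13 = attribute name, $16/$18 = old/new value
        print $7, $13, $16 "->" $18
      }'
    # prints:
    # /dev/sda Temperature_Celsius 95->97
    # /dev/sdb Temperature_Celsius 97->95
    ```

    In a real setup the input would come from `journalctl -u smartd` or the syslog file rather than `printf` samples.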
  4. [PVE6] ZFS issue (freeze i/o r/w)

    Syslog fragment: tail -300 /var/log/syslog | grep -v 'VE replication' Apr 10 12:41:01 pve-us systemd[1]: pvesr.service: Succeeded. Apr 10 12:42:01 pve-us systemd[1]: pvesr.service: Succeeded. Apr 10 12:43:01 pve-us systemd[1]: pvesr.service: Succeeded. Apr 10 12:44:01 pve-us systemd[1]...
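    The `grep -v` filter above drops the replication noise line by line; an alternative sketch is to strip the timestamp and count identical messages, which condenses long runs of repeated lines. The `printf` sample stands in for `/var/log/syslog`:

    ```shell
    # Collapse repetitive syslog lines: drop fields 1-3 (month, day, time),
    # then count identical remainders.
    printf '%s\n' \
      'Apr 10 12:41:01 pve-us systemd[1]: pvesr.service: Succeeded.' \
      'Apr 10 12:42:01 pve-us systemd[1]: pvesr.service: Succeeded.' \
      'Apr 10 12:43:01 pve-us systemd[1]: pvesr.service: Succeeded.' |
      cut -d' ' -f4- | sort | uniq -c
    # prints (count-prefixed): 3 pve-us systemd[1]: pvesr.service: Succeeded.
    ```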
  5. [PVE6] ZFS issue (freeze i/o r/w)

    Nothing. They are all in the same backplane. MB: Supermicro SYS-1028U-TNRT+/X10DRU-i+
  6. [PVE6] ZFS issue (freeze i/o r/w)

    It does not log ANYTHING.
  7. [PVE6] ZFS issue (freeze i/o r/w)

    My installation has two 2630 v4 CPUs and 256 GB of RAM. It does not use swap.
  8. [PVE6] ZFS issue (freeze i/o r/w)

    Hi, in my PVE 6 installation I have an issue with the ZFS RAID 1. At times it stops processing I/O on the disks: there is no read/write and the VMs freeze. The issue is present on the sda, sdb, sdc and sdd disks. All of them are Seagate BarraCuda ZA2000CM10002 drives. ZPOOL output: # zpool...
  9. [ZFS] SSD High I/O (Standalone Installation)

    zfs_vdev_aggregate_trim 0 zfs_vdev_aggregation_limit 1048576 zfs_vdev_aggregation_limit_non_rotating 131072 zfs_vdev_async_read_max_active 3 zfs_vdev_async_read_min_active...
  10. [ZFS] SSD High I/O (Standalone Installation)

    zfs_abd_scatter_enabled 1 zfs_abd_scatter_max_order 10 zfs_abd_scatter_min_size 1536 zfs_admin_snapshot 0 zfs_arc_average_blocksize...
  11. [ZFS] SSD High I/O (Standalone Installation)

    Tunables: dbuf_cache_hiwater_pct 10 dbuf_cache_lowater_pct 10 dbuf_cache_max_bytes 1073741824 dbuf_cache_shift 5...
  12. [ZFS] SSD High I/O (Standalone Installation)

    ZFS Subsystem Report Wed Dec 25 23:15:04 2019 Linux 5.0.21-1-pve 0.8.1-pve2 Machine: pve-us (x86_64) 0.8.1-pve2 ARC status: HEALTHY...
  13. [ZFS] SSD High I/O (Standalone Installation)

    zfs list ssd-zfs ssd-zfs 696G 4.43T 232K /ssd-zfs ssd-zfs/subvol-100-disk-0 1.08G 28.9G 1.08G /ssd-zfs/subvol-100-disk-0 ssd-zfs/subvol-102-disk-0 931M 7.09G 931M /ssd-zfs/subvol-102-disk-0 ssd-zfs/subvol-105-disk-0 3.22G 16.8G 3.22G...
  14. [ZFS] SSD High I/O (Standalone Installation)

    Hi, I have a standalone Proxmox installation and see high I/O when working in any instance running on the SSD RAID. proxmox-ve: 6.1-2 (running kernel: 5.0.21-1-pve) pve-manager: 6.1-3 (running version: 6.1-3/37248ce6) pve-kernel-5.3: 6.0-12 pve-kernel-helper: 6.0-12 pve-kernel-5.0: 6.0-11...
  15. Disks files map (Host<->Guest)

    Hi, I need to know which file (qcow2/raw) corresponds to each vdX in the guest. In this example, vdf is virtio3 (vm-141-disk-8), but only because I already know it. [screenshots: Proxmox host, Guest] I cannot find any tool that indicates which unit corresponds to each file.
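    A hedged sketch of one way to build that map for virtio-blk disks, not a verified recipe: on the host, `qm config 141` lists the virtioN entries in order (141 is the VM id from the post), and inside the guest, `/dev/disk/by-path` symlinks encode the PCI slot of each virtio disk. Matching slot order to virtioN order is an assumption about how QEMU assigns virtio-pci devices; verify it against your own VM. The `printf` lines below are simplified sample `ls -l` output:

    ```shell
    # Guest side: turn by-path symlinks into a "PCI slot -> vdX" map.
    # (On the host, compare against: qm config 141 | grep '^virtio')
    printf '%s\n' \
      'lrwxrwxrwx 1 root root 9 virtio-pci-0000:00:0a.0 -> ../../vda' \
      'lrwxrwxrwx 1 root root 9 virtio-pci-0000:00:0b.0 -> ../../vdb' |
      awk '{ sub(/.*\//, "", $NF); print $6, "->", $NF }'
    # prints:
    # virtio-pci-0000:00:0a.0 -> vda
    # virtio-pci-0000:00:0b.0 -> vdb
    ```

    In a real guest the input would be `ls -l /dev/disk/by-path/` (which adds date fields, shifting the slot to a later column), so adjust the awk field accordingly.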
  16. [SOLVED] PVE 5.2 - W2K3 R2 x64 idle 100% CPU

    SOLVED: the OS was infected with a cryptomining virus (lsmose.exe).
  17. [SOLVED] PVE 5.2 - W2K3 R2 x64 idle 100% CPU

    Last week I migrated a VM from Xen to Proxmox. When the VM is idle for two hours, CPU consumption goes up to 100%; logging in returns it to normal for a while. I ran strace for 7 minutes. I have two Proxmox 5.2 hosts (one Intel, one AMD) and the same thing happens on both. I have...
  18. Can't restore from PVE 4 to PVE 5

    I created a snapshot of a QEMU instance on PVE 4 and copied it by scp to PVE 5 (ZFS storage, RAID 1), but I can't restore the snapshot. pveversion -v proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve) pve-manager: 5.1-35 (running version: 5.1-35/722cc488) pve-kernel-4.13.4-1-pve: 4.13.4-25...