Recent content by zima

  1.

    spurious kernel messages since upgrade to 7.2

    Hi, exactly the same problem here on identical servers: those with BCM57412 NetXtreme-E 10Gb NICs have the problem, but those with Intel cards seem OK. In the logs on all nodes the problem starts with: Oct 5 14:11:45 kernel: [135939.389881] unchecked MSR access error: WRMSR to 0x19c (tried to write...
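    A quick way to check whether a node is affected (a sketch; it just greps the kernel log for the message quoted above):
    journalctl -k | grep -i 'unchecked MSR access error'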
  2.

    How to delete VM/CT RDD data?

    This folder contains all the RRD files from the cluster, so there is no need to connect to every node to delete files. I don't think there is an API way to do this - such functionality would be invoked when a VM is deleted, but instead its RRD file lives there forever.
  3.

    How to delete VM/CT RDD data?

    RRD files are in /var/lib/rrdcached/db/pve2-vm
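    To drop the stale graph data for a removed guest you would then delete its file by VMID - a sketch, assuming a hypothetical guest with ID 100:
    rm /var/lib/rrdcached/db/pve2-vm/100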
  4.

    [SOLVED] Windows server 2019

    It was just a quick install test to check whether it would detect the disk - only this version of the old virtio ISO was on the machine. By saying 'it works fine' I meant 'it detected the disk', even with the 0.1.134 version.
  5.

    [SOLVED] Windows server 2019

    works fine with virtio-win-0.1.134
  6.

    LVM-thin to directory

    If I understand correctly, this is what you want (selecting disk image and container): https://forum.proxmox.com/threads/no-dev-mapper-pve-data-after-ve-5-0-installation.36324/#post-178138
  7.

    Proxmox Cluster Broken almost every day

    Looks like an issue with multicast/IGMP on the switch/router.
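    You can verify multicast between the nodes with omping - a sketch, with node1/node2/node3 standing in for your real node names:
    omping -c 600 -i 1 -q node1 node2 node3
    Packet loss here points at the switch's IGMP snooping/querier configuration.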
  8.

    clear unused space on thin-lvm

    You issue a TRIM to free space on lvm-thin, regardless of whether the underlying disk is a spinner or an SSD. On a Windows VM, to force a TRIM run in PowerShell: Optimize-Volume -DriveLetter C -ReTrim -Verbose
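    On a Linux guest the equivalent would be fstrim (a sketch; assumes the VM's disk has discard enabled so the trim actually reaches the thin pool):
    fstrim -av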
  9.

    No /dev/mapper/pve-data after VE 5.0 installation

    create volume:
    lvcreate -V100G -T pve/data --name XXX
    mkfs.ext4 /dev/pve/XXX
    mount it (and add it to fstab) under a dir, e.g. /templates, then add it in storage as type directory and select templates, iso and backup
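    The mount and fstab entry could look like this - a sketch, keeping the placeholder volume name XXX and the /templates directory from above:
    mkdir /templates
    echo '/dev/pve/XXX /templates ext4 defaults 0 2' >> /etc/fstab
    mount /templates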
  10.

    High IO latency - low FSYNCS rate

    I would power off the server and reconnect the battery again - if that does not help, maybe the battery is bad. Last idea - you can try to force a learn cycle: megacli -AdpBbuCmd -BbuLearn -aALL -NoLog
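    To watch the battery state afterwards you can poll the BBU status - a sketch, using the same megacli binary:
    megacli -AdpBbuCmd -GetBbuStatus -aALL -NoLog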
  11.

    High IO latency - low FSYNCS rate

    full BBU info:
    megacli -AdpBbuCmd -aAll -NoLog
    to force writeback:
    megacli -LDSetProp -ForcedWB -Immediate -Lall -aAll
  12.

    High IO latency - low FSYNCS rate

    probably writeback is disabled. Get megacli from: http://hwraid.le-vert.net/wiki/DebianPackages
    Get info with:
    megacli -LDInfo -LAll -aAll
    and look for Current Cache Policy. To enable writeback on the fly:
    megacli -LDSetProp WB -LALL -aALL -NoLog
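    To confirm the change took effect you can filter the same info command - a sketch:
    megacli -LDInfo -LAll -aAll | grep -i 'cache policy'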
  13.

    [SOLVED] Ceph on 5.0 as storage for 4.4

    Installing jewel did the trick, 4.4 is connecting to Ceph again. Thank you.
  14.

    [SOLVED] Ceph on 5.0 as storage for 4.4

    client 4.4: ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
    on 5.0: ceph version 12.0.3 (26cbb7ec2e7864aaa43528631331338f1ea55775)
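    To check which client version a node is actually running, a sketch:
    ceph --version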
  15.

    [SOLVED] Ceph on 5.0 as storage for 4.4

    After yesterday's upgrade of 5.0 to:
    proxmox-ve: 5.0-12 (running kernel: 4.10.15-1-pve)
    pve-manager: 5.0-10 (running version: 5.0-10/0d270679)
    pve-kernel-4.10.15-1-pve: 4.10.15-12
    pve-kernel-4.10.11-1-pve: 4.10.11-9
    libpve-http-server-perl: 2.0-4
    lvm2: 2.02.168-pve2
    corosync: 2.4.2-pve3
    libqb0...