Search results

  1.

    Error installing Proxmox on a Dell R710

    Had some free time to test.) I have the same problem with only RAID configured, so I used a separate SSD for the system and 4 HDDs in RAID 10.
  2.

    Error installing Proxmox on a Dell R710

    Do you use UEFI or legacy boot? I had a similar issue with legacy boot. Also, did you write the stick in ISO or DD mode in Rufus? P.S. The best choice is to use DD mode in Rufus and UEFI boot in the BIOS.
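What "DD mode" in Rufus does can be sketched with plain dd: a raw byte-for-byte copy of the ISO onto the device. The sketch below is a minimal stand-in, assuming scratch files in place of the real ISO and USB stick; /dev/sdX in the comment is a placeholder, not a path from the thread.

```shell
# Stand-in for a Rufus DD-mode write: a raw byte-for-byte copy.
# fake.iso / fake_stick.img are scratch files standing in for the real
# ISO and USB device; on real hardware the of= target would be /dev/sdX.
head -c 1048576 /dev/urandom > /tmp/fake.iso
dd if=/tmp/fake.iso of=/tmp/fake_stick.img bs=4M conv=fsync 2>/dev/null
cmp -s /tmp/fake.iso /tmp/fake_stick.img && echo "byte-identical copy"
```

Because nothing reinterprets the image, the ISO's own partition table and boot loaders land on the stick unchanged, which is why DD mode tends to be the safer choice for the Proxmox installer.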
  3.

    [SOLVED] Samsung 870 QVO 1TB Terrible Write Performance

    Thanks for your reply. I found similar answers about caching. Yes, if I create a VM on the 870 QVO and use it with write-back cache, the performance is very high. But if I migrate a VM to this SSD, the performance becomes very slow after 20-40 GB of transfer. But after about 20 minutes the performance of working...
  4.

    [SOLVED] Samsung 870 QVO 1TB Terrible Write Performance

    On Windows the tests were simple: copy/paste, moving from another SSD, CrystalDiskMark with changed options like bs=4k (bs=128k), queues=1,3,8; file size = 30 GB; 20k files (in a folder) of 3-8 KB each. On Linux (tried on PVE, Debian 10, Ubuntu 18.04) dd and fio (as in the posts above) - same results, approx 2-5 MB...
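A minimal Linux-side probe in the same spirit as the runs above, assuming GNU dd; /tmp/qvo-test is a scratch path, not one from the thread. It uses the same 128k block size; note the thread wrote tens of GB to get past the QVO's SLC cache, which is where the slowdown appeared, while this keeps the size small.

```shell
# Sequential write probe with bs=128k, roughly mirroring the options above.
# conv=fsync makes dd flush before reporting, so cache-only speed isn't shown.
# 100 MiB total to keep it quick; real runs need far more to exhaust SLC cache.
dd if=/dev/zero of=/tmp/qvo-test bs=128k count=800 conv=fsync
```

dd prints the elapsed time and MB/s to stderr when it finishes; fio gives finer control (queue depths, random patterns) but this is enough to reproduce a basic sequential number.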
  5.

    [SOLVED] Samsung 870 QVO 1TB Terrible Write Performance

    Hello. I faced a similar problem with the 870 QVO 1TB. I tested on LVM-thin and directory configs... Proxmox 6.4 latest. Results: sequential and random reads - 520-550 MB/s; sequential and random writes - 78-81 MB/s. And it makes no difference whether it is sequential or random... Tried this SSD on a Windows machine -...
  6.

    live migration with discard enabled

    I tried with LVM, LVM-thin (the one I use most) storage and a simple directory (qcow2). No, with Proxmox 7 I didn't try; I have had this issue since old Proxmox 5... I googled about high IO delay and there are a lot of threads, but nothing about that...
  7.

    live migration with discard enabled

    Hello. I have trouble with live migration if "discard=on" is set for the VM. For some time it stalls the whole system with IO delay at 90%, and then the migration proceeds normally. It seems that Proxmox first moves something (the whole disk size) - I can see it in the network monitor - and then moves the disk as usual. I...
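For context, the discard flag in question is a per-disk option in the VM's config file. The vmid, storage name, and size below are illustrative, not taken from the thread:

```
# /etc/pve/qemu-server/<vmid>.conf - illustrative disk line
scsi0: local-lvm:vm-100-disk-0,discard=on,size=32G
```

With discard=on, guest TRIM/UNMAP requests are passed down to the underlying storage, which is what ties this option to the zeroing/unmap behavior discussed in these threads.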
  8.

    LVM zeroing slows SSD

    Hello. Recently I replaced the SSD drives on my hosts and faced a problem: very slow performance in VMs. For a long time I couldn't understand what was happening. I used Crucial MX500, but bought Samsung 870 QVO (as in tests it is faster?). So, after some time I found (experimentally) that with "discard"...
  9.

    Cluster in different subnets

    Hello. I have a working cluster with 2 nodes in the 192.168.88.0/24 subnet. I also made an EoIP tunnel to another site and need to add a node with IP 192.168.88.105/25. All network configs are done, and all IPs can reach each other over TCP and UDP. I mean that I can use ping, nmap -sU, tested with iperf3 TCP...
  10.

    Rate limit strange

    Without rate=20; with rate=20. These tests were done one after another (within approx 5 minutes...). Real memory: 223.95 MiB used / 728.79 MiB cached / 1.95 GiB total. Virtual memory: 0 bytes used / 1.99 GiB total. CPU load averages: 0.00 (1 min), 0.00 (5 mins), 0.00 (15 mins). AMD Ryzen 9 3950X...
  11.

    Rate limit strange

    No. The "test" VM is not rate limited. Actually, neither is the host.)
  12.

    Rate limit strange

    Between a VM and the host, or between different VMs even on another host in the cluster - 8.8-9.4 Gbps. In fact I want to limit the VM to 100 Mbps (approx 12.5 MB/s).
  13.

    Rate limit strange

    Also tried with E1000 and RTL8139; the situation is the same. Firewall or multiqueue has no effect. It is very strange that in/out traffic is limited in megabytes per second but internal traffic in megabits per second... With this config: speedtest - 91 Mbps; iperf with a local VM (or the host) - 25 Mbps; without...
  14.

    High IO on LVM

    Hello. I face a problem where IO wait on LVM (or ZFS) storage is 99.9%. I tried to change dirty_ratio, dirty_background_ratio etc., but still no result. I use ordinary SSDs: 2TB Crucial MX500, 1TB Samsung EVO 870 - the same problem on both... When I use a directory, everything is OK. Can somebody suggest...
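The writeback knobs mentioned above are kernel sysctls; a sketch of where they are typically set (the file name and values are purely illustrative, not a recommendation from the thread):

```
# /etc/sysctl.d/90-writeback.conf - illustrative values only
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
```

Lowering them caps how much dirty data the page cache may buffer before writeback starts, which mostly changes when the IO-wait stall appears rather than the underlying storage speed, consistent with the "still no result" observation above.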
  15.

    Rate limit strange

    Hello. I faced a problem where the speed between VMs is very slow. Almost all VMs have a rate limit of 12 (approx 100 Mbps). A speedtest from any VM shows 100 Mbps, but iperf3 between them shows exactly 12 Mbps (2.6 MB/s). Maybe the rate limit can take other values?
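The units here are worth pinning down: the rate= field on a Proxmox NIC is specified in megabytes per second, while speedtest and iperf3 report megabits per second, a factor of 8 apart. A one-liner for the conversion (the 100 Mbit/s target is the one from the post):

```shell
# rate= on a Proxmox NIC is megabytes/s; benchmarks report megabits/s.
# To cap a VM at 100 Mbit/s, divide by 8 to get the value for rate=.
awk 'BEGIN { target_mbit = 100; printf "rate=%.1f\n", target_mbit / 8 }'
```

So rate=12 corresponds to roughly 96 Mbit/s, which matches the ~100 Mbps speedtest readings; an iperf3 result of "12" alongside that suggests the two tools are being read in different units rather than the limiter misbehaving.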
  16.

    kworker 99% IO

    The same problem here; finishing a 32 GB VM restore (the underlined one) took about 3.5 hours... I tried: VM on LVM-thin --> VM on LVM-thin, VM on ZFS, VM on qcow; VM on qcow --> VM on LVM-thin, VM on ZFS, VM on qcow. With Proxmox 6.3 there weren't any troubles. Added: tested once more with different variants...
  17.

    Storage lost

    For about 3 weeks everything has worked fine. I replaced the power supply unit, from the default one that came with the case to a separate one. Somewhere I read a post that various SSDs/NVMes are very sensitive to the power line. So you need a good power supply unit (not a powerful one, but one that filters out interference) or a good UPS with a "good"...
  18.

    Storage lost

    Seems you are right, but it is very interesting. I have 2 hosts connected to one UPS. I reviewed the situation with power: yes, there was a switchover to the UPS for half a second. The whole machine kept working; only the SSDs felt it... Now I have permanently disabled IOMMU in the BIOS and will watch it for longer. For a few...
  19.

    Storage lost

    Replacing is a bit harder, but I have 3 hosts. They have different power supply units and CPUs (Ryzen 7 3900X, Ryzen 9 3950X, Ryzen 9 5900), and all of them have the same problem. The hosts are not heavily loaded (avg CPU usage 10%, memory not higher than 60%).
  20.

    Storage lost

    There are no USB devices. It is a host with only power and network cables connected.)))) I tried disabling IOMMU in the BIOS; still no effect. The worst thing is that I cannot see what causes this... It works fine, but at some moment it happens. And there is nothing in the logs.