yes
https://bugzilla.kernel.org/show_bug.cgi?id=199727
"I tried "virtio scsi single" with "aio=threads" and "iothread=1" in Proxmox, and after that, even with very heavy read/write I/O inside 2 VMs (located on the same spinning HDD on top of a ZFS lz4 + zstd dataset and qcow2) and severe write...
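As a rough sketch, the controller and disk settings described above could be applied on the Proxmox CLI like this; the VM ID 100 and the storage/disk names are placeholders, not taken from the original post:

```shell
# Hypothetical example: switch VM 100 to the VirtIO SCSI single controller
# and attach its disk with aio=threads and a dedicated IO thread.
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=threads,iothread=1
```

With virtio-scsi-single each disk gets its own controller, which is what makes the per-disk iothread setting effective.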
Also, the BMC at 73 °C and two fans at 10% and one at 30% should really be no cause for complaint. Although I do find 09-BMC a bit warm there too...
Well, I also had to tinker for a while with my HP DL380 Gen7 with old LSI cards; controller order alone apparently isn't enough, the bootable disk also has to sit in the correct drive slot.
> From our daily CAD work we know that an average gaming graphics card is completely sufficient
Wouldn't an NVIDIA Quadro K620 already be enough in most cases?
What do you mean by "are authorized"? HBA mode sucks with these controllers. They are not made for this, and even if it may work initially, they fail miserably under disk error conditions: controller freezes, stuck zpools... whatever.
Fleecing works well for me, at first glance.
But am I right that it is only available when doing a backup via a backup job, not when backing up manually via the "Backup now" option in the VM configuration?
I guess this is planned to be added later on?
EDIT:
This has nothing to do with the 8.2 update; there was a change to alias handling last year, and apparently settings get changed if you edit and re-select via the dropdown box. So it's not an issue but a feature. See...
I updated my non-production test system.
I want to try the new fleecing feature.
Why is the fleecing storage "local-zfs" selected by default and greyed out, so that no other fleecing storage can be selected?
I got another VM with this error:
qemu-img: Could not delete snapshot 'vor_os_update': Failed to free the cluster and L1 table: Invalid argument
TASK ERROR: command '/usr/bin/qemu-img snapshot -d vor_os_update /rpool/vms-files-zstd/images/106/vm-106-disk-0.qcow2' failed: exit code 1
> https://forum.proxmox.com/threads/opt-in-linux-6-8-kernel-for-proxmox-ve-8-available-on-test-no-subscription.144557/post-651341
The reported performance differences of Docker inside LXC here are one more reason for me NOT to use Docker inside LXC, or to recommend doing so.
Hello,
does anybody know a good site that benchmarks SSDs for comparison and provides reliable information on sustained write performance?
For many, many SSDs, especially the cheaper ones, sustained write really sucks...
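For a rough home-grown measurement of sustained write (as opposed to short bursts that fit in a drive's SLC cache), an fio run along these lines could be used; the file path, size, and runtime are assumptions and would need adapting to the drive under test:

```shell
# Hypothetical fio job: sequential 1 MiB writes with direct I/O for 10 minutes,
# logging average bandwidth per second so the drop-off after the cache fills
# becomes visible in the bandwidth log.
fio --name=sustained-write \
    --filename=/mnt/testdrive/fio-test.bin --size=64G \
    --rw=write --bs=1M --direct=1 --ioengine=libaio \
    --time_based --runtime=600 \
    --write_bw_log=sustained-write --log_avg_msec=1000
```

The key point is writing far more data than the drive's pseudo-SLC cache can absorb; short benchmarks only ever see the cached speed.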