I fixed the issue on my end:
I added these lines to the container's .conf file:
lxc.mount.auto: cgroup:rw
lxc.mount.auto: sys:rw
@Pedulla maybe you can try this. Would like to hear if this solves your problem too.
However, I'm not sure about any implications this has. With the previous version...
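For anyone unsure where those lines go: on Proxmox the container config lives in /etc/pve/lxc/<ID>.conf. A minimal sketch; the container ID, ostype, and rootfs values here are placeholders, only the two lxc.mount.auto lines are the actual fix:

```text
# /etc/pve/lxc/101.conf  (101 is a placeholder container ID)
arch: amd64
ostype: debian
rootfs: local-zfs:subvol-101-disk-0,size=8G

# Remount cgroupfs and sysfs read-write inside the container:
lxc.mount.auto: cgroup:rw
lxc.mount.auto: sys:rw
```

Restart the container afterwards for the mount options to take effect.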
@Pedulla Were you able to fix this? I'm running into the same issue after upgrading from 6.4 to 7.1 last weekend.
Some of my containers with snap (like AdGuard) now have problems too. They will not start and run into file permission problems.
@oguz Might this be related to the Proxmox version...
After upgrading/changing the hard drives, the issue persisted.
With the ZFS version zfs-2.0.4-pve1 the issues disappeared.
I made the same tests over the past few days. Large file transfers are now back in the 100 MByte/s area and the direct write tests from SSD ZFS to the HDD ZFS work now...
Thanks for your question.
The last scrub was on the 10th of January, and it completed successfully without any issues that needed to be repaired.
A different slot will be difficult, as I use the onboard SATA ports. ;-) Waiting for the controller to arrive.
Cheers,
Chris
Hi,
no worries. I'm glad about every bit of help I can get. I have to apologize for my late response. Sometimes other things are more important than computers, and I needed some time to test.
I tested the suggested settings. I think they eased the problem a little, but there was no significant improvement.
I ordered now...
One additional piece of information:
The syslog shows the following messages at more or less irregular intervals:
Jan 20 20:26:17 proxmox02 kernel: Call Trace:
Jan 20 20:26:17 proxmox02 kernel: __schedule+0x2e6/0x6f0
Jan 20 20:26:17 proxmox02 kernel: schedule+0x33/0xa0
Jan 20 20:26:17 proxmox02...
Hi,
here is the result for zpool list -v:
root@proxmox02:~# zpool list -v
NAME              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hdd_zfs_guests   7.25T  4.24T  3.01T        -         -    25%    58%  1.19x  ONLINE  -
  mirror...
Thanks for your help.
I will do the test tomorrow.
Just one remark: As you can see in my previous post, I first created the random file and only after creation transferred it from the SSD pool to the HDD pool. So even if /dev/urandom were a bottleneck, it would not affect the copy speed...
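The test procedure described above can be sketched like this. The mountpoints and file size are placeholders (my real pools are the SSD and HDD ZFS pools); the point is that the random data is fully written to disk before the copy, so /dev/urandom throughput cannot skew the measured transfer rate:

```shell
#!/bin/sh
# Placeholder paths; substitute the mountpoints of your SSD and HDD pools.
SRC=/tmp/src_pool
DST=/tmp/dst_pool
mkdir -p "$SRC" "$DST"

# Step 1: create the random test file on the source pool first.
dd if=/dev/urandom of="$SRC/testfile" bs=1M count=16 status=none

# Step 2: copy it to the destination pool; dd prints the achieved
# throughput (bytes/s) on stderr when it finishes.
dd if="$SRC/testfile" of="$DST/testfile" bs=1M
```

For a realistic measurement the file should be much larger than the ARC and any write cache, otherwise you only benchmark RAM.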
No, those spikes are roughly in the 30 - 50 second range.
Edit: To be precise -> the high write rate lasts around 10 seconds; the drops are in the 30 - 50 second range.
Hello Fabian,
thanks for the reply. I did some testing over the weekend. Trusting the backup, I reverted to 0.8.3.
The transfer rate increased massively. The dips still occur, but they only go down to roughly 40 MByte/s. That is slower than before the upgrade to 6.3, but much better than with 0.8.5...