I fixed the issue for me:
Added these lines to the container's .conf file:
lxc.mount.auto: cgroup:rw
lxc.mount.auto: sys:rw
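A minimal way to check, from a shell inside the container after restarting it, whether the cgroup filesystem actually came up read-write (this is just a sanity check I'd suggest, not something from the original fix):

```shell
# Inside the container: list cgroup-related mount entries.
# The cgroup/cgroup2 lines should show "rw" in their mount options.
grep cgroup /proc/mounts
```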
@Pedulla maybe you can try this. Would like to hear if this solves your problem too.
However, I'm not sure what implications this has. With the previous version...
@Pedulla Were you able to fix this? I'm running into the same issue after upgrading from 6.4 to 7.1 last weekend.
Some of my containers with snap (like AdGuard) now have problems too. They will not start and run into file-permission errors.
@oguz Might this be related to the Proxmox version...
After upgrading/changing the hard drives, the issue persisted.
With the ZFS version zfs-2.0.4-pve1 the issues disappeared.
I made the same tests over the past few days. Large file transfers are now back in the 100 MByte/s area and the direct write tests from SSD ZFS to the HDD ZFS work now...
Thanks for your question.
Last scrub was on 10th of January, and it completed successfully without any issues needing repair.
A different slot will be difficult, as I use the onboard SATA ports. ;-) Waiting for the controller to arrive.
Cheers,
Chris
Hi,
no worries. I'm glad about every bit of help I can get. I have to apologize for my late response; sometimes other things are more important than computers, and I needed some time to test.
I tested the suggested settings. I think it eased it a little bit, but no significant improvement.
I ordered now...
One additional piece of information:
The syslog shows the following messages at more or less irregular intervals:
Jan 20 20:26:17 proxmox02 kernel: Call Trace:
Jan 20 20:26:17 proxmox02 kernel: __schedule+0x2e6/0x6f0
Jan 20 20:26:17 proxmox02 kernel: schedule+0x33/0xa0
Jan 20 20:26:17 proxmox02...
Hi,
here is the result for zpool list -v:
root@proxmox02:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
hdd_zfs_guests 7.25T 4.24T 3.01T - - 25% 58% 1.19x ONLINE -
mirror...
Thanks for your help.
I will do the test tomorrow.
Just one remark: As you can see in my previous post, I first created the random file and only after creation transferred it from the SSD pool to the HDD pool. So even if /dev/urandom were a bottleneck, it would not affect the copy speed...
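The two-step method, scaled down to /tmp as a runnable sketch (the real test used the SSD and HDD pool mountpoints and a much larger file; the paths and sizes here are placeholders):

```shell
# Step 1: create a random test file (only 16 MiB here; the real test was multi-GiB).
dd if=/dev/urandom of=/tmp/zfs_testfile bs=1M count=16 2>/dev/null
# Step 2: copy it separately, so /dev/urandom speed cannot influence the copy rate.
dd if=/tmp/zfs_testfile of=/tmp/zfs_testfile_copy bs=1M 2>/dev/null
stat -c %s /tmp/zfs_testfile_copy   # → 16777216
```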
No, those spikes are roughly in the 30 - 50 second range.
Edit: To be correct -> The high write rate is around 10 seconds. The drops are in the 30 - 50 seconds area.
Hello Fabian,
thanks for the reply. I did some testing over the weekend. Trusting the backup, I reverted to 0.8.3.
The transfer rate increased massively. The dips still occur, but they only go down to around 40 MByte/s, which is slower than before the upgrade to 6.3, but much better than with 0.8.5...
Thanks for the numbers. Your SSDs also seem to have low FSYNC performance, even for consumer SSDs. Interesting.
However, I was thinking more about transferring data from the SSD pool to the HDD pool like I did here, by first creating a large random file on the SSD pool and then transferring...
Hi,
this sounds a little bit like the problem I've had since upgrading Proxmox.
https://forum.proxmox.com/threads/zfs-on-hdd-massive-performance-drop-after-update-from-proxmox-6-2-to-6-3.81820/
I see the behavior mainly when writing files, but I also see a massive performance hit when reading...
OK. In the meantime I did a first test with sync=disabled. Same effect: the HDDs are painfully slow on writes.
The FSYNCS increase dramatically, but the write issue stays the same.
This is driving me nuts. I had a perfectly working system for so long.
Just one question regarding SLOG:
When setting sync=disabled for testing purposes, I should see the same performance increase as when using an SLOG device, right?
I know that disabling sync is in general a bad idea, but as a quick-and-dirty test it's OK.
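The quick-and-dirty toggle would look something like this (the dataset name is a placeholder taken from the zpool output earlier in the thread; remember to restore the setting afterwards):

```shell
# Note the current value so it can be restored later.
zfs get sync hdd_zfs_guests
# Disable sync writes for the test (a bad idea in production!).
zfs set sync=disabled hdd_zfs_guests
# ... run the write test here ...
# Restore the default behaviour.
zfs set sync=standard hdd_zfs_guests
```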
Thanks!
Christian
Thanks for your answer.
Yes, it looks like cache saturation, but the system ran for 18 months without this issue.
When I built the system I did some extensive tests, and I could always reach at least 130 MByte/s write speed to the HDDs.
So I could always saturate 1 gigabit...
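For reference, a fully saturated 1 Gbit/s link tops out at roughly 125 MByte/s, so ~130 MByte/s of sustained disk write speed was enough to keep the network link busy:

```shell
# 1 gigabit = 1000 megabits; 8 bits per byte.
echo $((1000 / 8))   # → 125
```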