Further to my previous post, if I create a ZFS filesystem on the root pool and mount it into a privileged container, the performance is within the margin of error of the host.
There seems to be some issue with the way the LXC container treats any mounted storage created by the CT wizard.
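For anyone wanting to reproduce that comparison, the test boiled down to something like the following (the pool name, dataset and CT ID are just examples from my setup, adjust to suit):
zfs create rpool/fio-test
pct set 101 -mp0 /rpool/fio-test,mp=/mnt/fio-test
I then ran the identical fio job on the host against /rpool/fio-test and inside CT 101 against /mnt/fio-test.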
I've been doing some further testing on the new Proxmox server I've spun up. All tests still performed with mitigations=off.
I created a Debian 11 VM with the following settings:
balloon: 0
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/debian-11.2.0-amd64-netinst.iso,media=cdrom
memory...
Yep, fio-3.25. Updated tests using io_uring are below. If you think of anything, please let me know. I'm not sure where I should be looking to troubleshoot things further. It's positive at least knowing that a fresh server still shows the same problem as my existing one, which means the problem is...
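For reference, the io_uring runs were produced with a command along these lines (the file path, size and runtime are illustrative rather than my exact job):
fio --name=randwrite-test --ioengine=io_uring --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --size=4G --direct=1 --runtime=60 --time_based --filename=/mnt/fio-test/testfile
The same command was run on the host, in the VM and in the CT, only changing --filename to point at the storage under test.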
Hi @t.lamprecht,
Thank you for your reply.
Correct, all of the originals were privileged.
In order to perform some of this testing I've fired up another Proxmox server running 7.1-7 with less storage: 2 mirrors, one with 2x 3TB disks and the other with 2x 1TB.
The problem still remains though. I...
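If anyone wants to replicate the layout of that test box, it is essentially a single pool made up of two mirror vdevs, i.e. something along the lines of (device names here are placeholders):
zpool create testpool mirror /dev/disk-a /dev/disk-b mirror /dev/disk-c /dev/disk-d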
Hi All,
Hoping to gain a better understanding of why I'm seeing a difference in performance running the same fio command in different contexts.
A little background: my pool originally consisted of 4 vdevs, each with 2x 3TB. However, due to a failing disk on mirror-3 and wanting to...
Thanks @sahostking and @scaa for adding your reports of the same error. It would be good to find out, either from someone in the community or from the Proxmox devs themselves, what is causing these errors and whether there is anything that can be done about them, so tagging @tom @fabian.
The node seems to be...
Hi folks,
Just after a sanity check here. I recently upgraded my Proxmox 5.x server to 5.1 and noticed the following errors on boot.
[ 0.000000] [Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0x3a (or later)
Reading all physical volumes. This may...
Great, thanks @fabian. Out of all the machines I have, this one with Samba is the only one with an issue; it wouldn't be so bad if it didn't cause everything to freeze.
Hopefully armed with the information you now have, it can be isolated and fixed. If there is anything else you require from me...
@fabian: I tried your suggestion.
1. Start CT.
2. Verify share is mounted.
3. Unmount share.
4. Verify share is unmounted.
5. Shutdown
No issues, as @ohmer noted. I thought the issue might still occur on startup, but it seems that once it has shut down with the share unmounted it boots without error...
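For completeness, the steps above map to roughly these commands (the CT ID and mount path are just examples):
pct start 101
pct exec 101 -- mount | grep /mnt/share
pct exec 101 -- umount /mnt/share
pct exec 101 -- mount | grep /mnt/share
pct shutdown 101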
Thank you for your reply @fabian. I am now better able to provide you the information you require to further assist with this particular problem. If there is anything I have missed or got wrong, or that you would like further information on, please don't hesitate to let me know.
1. Output of pveversion...
Anyone? @wolfgang, @tom, @dietmar, @martin, @fabian. Surely @ohmer and I aren't the only ones facing this issue.
If I could bring your attention to this post https://forum.proxmox.com/threads/unable-to-stop-a-container-waiting-for-lo-to-become-free.3510/
Created back in 2010 by @Smanux...
There are multiple reports on the web about this.
1. https://github.com/docker/docker/issues/5618
2. https://bugzilla.redhat.com/show_bug.cgi?id=880394#c13
3. https://bugzilla.kernel.org/show_bug.cgi?id=81211
4. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1403152
So this isn't...
As this problem doesn't seem to be getting any feedback, it must mean @ohmer and I are the only ones with the issue. Regardless, I decided to go ahead and record a video of the problem in action to hopefully assist with resolving it.
It isn't isolated to my server either...
UPDATE: Fixed.
If you find yourself in a similar situation, uncomment ZED_EMAIL_PROG (leaving it set to mail) and lastly uncomment ZED_EMAIL_OPTS.
After performing those two tweaks I am now able to receive an email notification every time ZFS scrubs. If you only wish to be notified on errors, set...
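For anyone following along, the relevant part of /etc/zfs/zed.d/zed.rc ends up looking roughly like this (the email address is just an example):
ZED_EMAIL_ADDR="root@example.com"
ZED_EMAIL_PROG="mail"
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
ZED_NOTIFY_VERBOSE=1
With ZED_NOTIFY_VERBOSE=1 you get a mail for every completed scrub as well as for problems.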
I have decided to automate the scrubbing of my pool following the advice provided by @tom in this post: ZFS Health. Because there is no point scrubbing if you aren't notified of a problem, I also looked into the suggestion at the end of that post about using /etc/zfs/zed.d/zed.rc.
Information about this ZFS...
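The scrub itself is just kicked off from cron; my entry is along these lines (the pool name and schedule are only an example, and it lives in a file under /etc/cron.d/, which is why the user field is included):
0 2 1 * * root /sbin/zpool scrub rpool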
Interesting. Thanks for adding your experience @ohmer. As I said in my first update, once I removed that troublesome container the problem seemed to go away. It only came back today, and then only for a very short time; it probably produced the error 3 times in 20...
Excellent, thanks for the detailed answer; it provided the confirmation I was looking for. It is easy enough to run "zfs mount -a", so that is the course of action I will take.
Hopefully this information will also assist others in a similar situation.
I am not sure if I have understood your question, but please see below for a screenshot of the moment the error occurs on boot and the output of df -h.
df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
udev             10M     0    10M    0%  /dev
tmpfs           401M  5.8M   395M    2%  /run...
Answers to your questions are below.
1. Correct. My SSD is where Proxmox is installed; all templates, storage and containers are kept on the ZFS pool.
2. I installed a fresh copy of Proxmox on my VirtualBox test environment, output below of pveversion -v.
proxmox-ve: 4.2-60 (running...