>That's not the point at all. I find it perfectly legitimate to use Docker in LXCs if you are aware of the problems and can live with
>the consequences that come with it (service outages, troubleshooting, etc.). My impression is...
Well, my VM with the Docker containers isn't actually that wasteful on the little Atom processor. If the small kernel is already too much, then the box is a bit tightly sized. And why should I actually only get incremental backups of the LXC...
I don't see a performance regression. The performance disadvantages of qcow2 on top of ZFS are greatly exaggerated, imho.
qcow2:
# dd if=/dev/zero of=/dev/sdb bs=1024k status=progress count=10240
9954131968 bytes (10 GB, 9.3 GiB) copied, 12...
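The dd run above only measures streaming zero writes; a roughly comparable test could also be done with fio (just a sketch with example job parameters, and like the dd above it overwrites /dev/sdb inside the test VM):

# sequential 1 MiB writes, direct I/O, 10 GiB total (overwrites /dev/sdb!)
fio --name=seqwrite --filename=/dev/sdb --rw=write --bs=1M --size=10G \
    --direct=1 --ioengine=libaio --iodepth=16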
This is not meant to make users/customers uncertain about product quality, but whoever is deeper into using qcow2 may want to have a look at https://bugzilla.proxmox.com/show_bug.cgi?id=7012; it's about a case where qcow2 performs pathologically...
Just an update. This was narrowed down to the Nimble box not supporting some newer functions.
BLKZEROOUT to be exact.
Disabling detect-zeroes is our workaround until Nimble provides a permanent fix. This issue is solved!
Proxmox...
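For reference, zero detection can be switched off per virtual disk via the detect_zeroes drive option; a minimal sketch, assuming VMID 100, an scsi0 disk and a storage called nimble-lvm (all placeholders):

# /etc/pve/qemu-server/100.conf: append detect_zeroes=0 to the affected disk line
scsi0: nimble-lvm:vm-100-disk-0,size=100G,detect_zeroes=0

The change takes effect the next time the VM is started.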
Hi,
a large part of that is feature requests, and that is not a bad thing in itself. Many things are easy to request, but take a lot of effort to actually implement. There's a wide variety of use cases and wishes.
If they really are outdated, feel...
So I disabled (removed) net1 from the broken server
and the problem is gone :)
No more hiccups... performance is a bit on the low side, but no full hiccups here.
So I reinstated net1, and the problem returned.
This is how it looks when...
I would first disable net1 temporarily in the old container and then give the new LXC 4 cores and 2048 MB.
Comparing two different operating systems is hard enough - but they should have the same LXC/hardware resources.
I guess the most prominent...
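A sketch of how those changes could be applied with pct, assuming the container ID is 101 (CTID and values are placeholders):

# temporarily remove the second network device
pct set 101 --delete net1
# give the container 4 cores and 2048 MB of RAM
pct set 101 --cores 4 --memory 2048

It helps to note the current net1 line from pct config 101 first, so it can be re-added afterwards.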
You mean you have a second LXC container which works without problems?
Those are both TurnKey Linux LXC containers, or what?
The one with the issue has Docker inside?
We had a ticket for this at https://bugzilla.proxmox.com/show_bug.cgi?id=3118, which I just closed.
If anybody still sees this with any recent Linux distro, please report it. (I guess this still happens somewhere, as bugs constantly (re)appear in Linux.) ;)
I think it's just a historical accident that they are both the same. Each technology might consider 80% a logical default. On recent PVE versions, you can also change the ballooning threshold. When using ZFS, where the cache memory counts as used...
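Somewhat related: if the ZFS ARC showing up as used memory is the concern, the ARC size can be capped; a minimal sketch (the 8 GiB limit is just an example value):

# /etc/modprobe.d/zfs.conf: cap the ZFS ARC at 8 GiB
# (run update-initramfs -u and reboot for it to take effect)
options zfs zfs_arc_max=8589934592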
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think that this should be adopted as a fix in an update of Proxmox VE. Maybe it's time to change the wording in the docs from...
For future "lessons learned" and the associated "copy & paste", to make it easier to get the point across to others:
It is a known problem that Docker containers inside LXC containers break every now and then during bigger updates. Both...
@floh8, I agree that btrfs may be faster than ZFS for small-file/metadata access, but I would not consider that worth trying until Proxmox verify gets optimized for better parallelism. You won't change your large xx terabytes...
@triggad, do you use the latest version of PBS?
There has been at least this enhancement, which can increase GC speed a lot:
https://git.proxmox.com/?p=proxmox-backup.git;a=commit;h=03143eee0a59cf319be0052e139f7e20e124d572
Total GC runtime...
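If in doubt, the installed PBS package versions can be checked on the backup server, e.g.:

# show installed Proxmox Backup Server package versions
proxmox-backup-manager versions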
Could you post your LXC container configuration, your Samba configuration from inside the container, your storage.cfg from /etc/pve, and the output of the "mount" command?
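For example, assuming the container ID is 101 and a default Samba setup (both are assumptions), this would collect everything:

# on the Proxmox host
pct config 101
cat /etc/pve/storage.cfg
mount
# inside the container
cat /etc/samba/smb.conf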