Not to defend the proxmox people :D but while it is well known that OpenVZ was in many ways superior to cgroups, it has fallen seriously behind kernel development due to politics and various technical problems, and proxmox is not really in a position to develop OpenVZ or fix cgroups...
Just adding a different story:
Host was offline for a while, came up again, and mon was stuck at "handle_auth_request failed to assign global_id".
I went off to read up on it, and when I came back (after 15 minutes) the mon was in again. It had tried probing for about 10 minutes, then started synchronising...
2478 - open (kernel question)
2477 - packaging problem
2464 - apparmor config problem?
2457 - worksforme
2450 - amd gpu related, investigating
2448 - some dbus corner case, but no comment on it
2446 - unreproducible
2443 - dupe of 1649; basically that iowait cannot be checkpointed, but they cannot...
I have directly responded to the post: "bug, hang, crash"
I agree, and I have tried browsing the issues but (by random sampling) found no relevant critical problem. Do you have a specific list of critical issues that are blocking normal everyday use?
If you have a list of critical...
I wouldn't say it's easy to tell where we are now, when the developers last checked the state of affairs, what works and what doesn't, what's blocking and where, where there is progress and what's stuck, et cetera...
I also remember, from half a decade ago, someone complaining that freezing of mount namespaces wasn't well developed in the kernel, but a lot of years have passed since then, and it may work now.
I probably shall play a little with criu and lxc and see what I get, but...
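If anyone wants to experiment along the same lines, a minimal sketch with `lxc-checkpoint` (which uses criu under the hood); this assumes a running container, here called `ct1` purely for illustration, a CRIU-capable kernel, and root:

```shell
# Dump the container's full state to disk (-s stops it after the dump),
# then restore it from the same image directory. The container name
# and checkpoint path are made-up examples.
lxc-checkpoint -n ct1 -D /var/tmp/ct1-ckpt -s -v
lxc-checkpoint -n ct1 -D /var/tmp/ct1-ckpt -r -v
```

Whether the dump succeeds depends heavily on what the container has open (tty, netlink, fuse mounts, ...), which is exactly the part worth testing.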
I understand that you don't think that.
Have you ever seen OpenVZ? It also did the "impossible", and had a working checkpointing and freeze/thaw infrastructure.
Like: https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt
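For reference, the cgroup-v1 freezer from that document boils down to something like this (sketch; requires root, a mounted v1 freezer hierarchy, and a real PID in place of `$SOME_PID`; the cgroup name `demo` is arbitrary):

```shell
# Create a freezer cgroup, move a task into it, then freeze and thaw it.
mkdir /sys/fs/cgroup/freezer/demo
echo $SOME_PID > /sys/fs/cgroup/freezer/demo/cgroup.procs
echo FROZEN > /sys/fs/cgroup/freezer/demo/freezer.state
cat /sys/fs/cgroup/freezer/demo/freezer.state   # FREEZING while in progress, then FROZEN
echo THAWED > /sys/fs/cgroup/freezer/demo/freezer.state
```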
I guess you never heard of criu either?
It's been more than 2 years now since I last asked for a status report on online migration of containers.
Oh, and I got no answer, so it's been 5 years since I have heard anything about this from you guys.
Would be nice to know that at least you keep in mind that we're right here waiting...
Maybe not just yet: https://www.phoronix.com/review/bcachefs-linux-67
(Apart from that it is way too young to be considered anyway. Maybe in 3-5 years.)
[...]
So I went out checking, and found:
(2024) it has gotten way faster [that's excellent];
(2024) it is not advised for SSD media due to extreme CoW wearout (write amplification) [not excellent];
(2015) it is not advised for any load with small writes (databases, containers, ...) (but I...
Are they? A few years ago its performance was still rather questionable. I see that newer benchmarks are getting better results, but I haven't found any recent serious-looking comparison.
Okay, let's differentiate.
One case is when you upgrade/restart lxcfs and this causes the problem you mentioned. This is what I was talking about: in some cases you cannot safely stop and start lxcfs (I think it's related to various changes in fuse and cgroups, but I never checked) and it needs a...
Yes. You have to reboot the machine. It seems to mess up so many things that nobody has ever made the effort to untangle it. Basically, major updates of lxcfs are the main reboot magnets in proxmox. That, and kernel updates.
For old systems (filestore) this may still be relevant: `ceph-volume` is not a dependency, just recommended, so it's possible that it doesn't get installed (or gets removed) at upgrade. Without it, filestores do not get mounted, so the OSDs will not be present and can't start.
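A sketch of getting such filestore OSDs back, assuming a Debian-based node (the `simple` subcommand is the one that handles legacy, non-LVM OSDs; the OSD path below is an example, repeat per OSD):

```shell
# Reinstall the package if the upgrade dropped it (it is only
# Recommended, not a hard Depends, so apt may have removed it).
apt-get install ceph-volume
# Scan an existing filestore OSD data dir and persist its metadata,
# then activate the scanned OSDs so they mount and start again.
ceph-volume simple scan /var/lib/ceph/osd/ceph-0
ceph-volume simple activate --all
```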
Is it possible that 2.3.x server breaks 2.0.x client?
The best indicator is that `proxmox-backup-client list` (2.0.14) waits forever and then times out trying to reach a 2.3.3 server, while a 2.3.1 client works fine.
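For anyone hitting the same thing, a quick way to spot such a skew before blaming the network, using only the two version strings (a plain-shell sketch; the versions are the ones from this thread):

```shell
# Compare client and server versions with a version-aware sort;
# if the client sorts first and differs, it is the older one.
client_ver="2.0.14"
server_ver="2.3.3"
older=$(printf '%s\n%s\n' "$client_ver" "$server_ver" | sort -V | head -n1)
if [ "$older" = "$client_ver" ] && [ "$client_ver" != "$server_ver" ]; then
  echo "client is older than server"
fi
```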
No, it is not a question whether it would be optimal if everything would be up to...
Without browsing through the code, here's one example from Ceph.pm (line 645, or the very end):
code => sub {
    my ($param) = @_;

    PVE::Ceph::Tools::check_ceph_inited();

    my $rados = PVE::RADOS->new();
    my $rules = $rados->mon_command({ prefix => 'osd crush rule...