Hi @aaron ,
Worked like a charm, thank you soooo much!
One last question: in the ProxMox UI I still have a warning about crashed mgr modules. It is related to the error I had, but everything is fine now. How can I clear this log?
Best regards
Hi,
I upgraded to Quincy and everything went fine, except that I have this issue with the devicehealth module:
2022-08-14T14:06:33.206+0200 7f16e2549700 0 [devicehealth INFO root] creating main.db for devicehealth
2022-08-14T14:06:33.206+0200 7f16e2549700 -1 log_channel(cluster) log [ERR] ...
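For context, the generic Ceph commands I know of for inspecting the mgr module and crash state (nothing specific to this bug; <crash-id> is just a placeholder):

ceph health detail           # shows which health warning / failed module is raised
ceph mgr module ls           # lists always-on and enabled mgr modules (devicehealth should appear there)
ceph crash ls                # lists recorded crashes with their ids and timestamps
ceph crash info <crash-id>   # prints the full backtrace for one crash entry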
Wow, 512 MB just for EFI? Isn't that a bit big? But OK, good to know.
Thanks for the maxvz option!
Maybe something more explicit would be welcome: a simple "Create default LVM storage for VMs/CTs" checkbox to enable/disable this feature would be a nice addition.
Thanks...
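For anyone else finding this thread, this is my understanding of the installer's advanced disk options (the numbers below are just an example layout for a small disk, not a recommendation):

hdsize   12   # total disk space the installer will use, in GB
swapsize  1   # size of the swap volume
maxroot  10   # maximum size of the root volume
minfree   1   # LVM space left free in the volume group (e.g. for snapshots)
maxvz     0   # maximum size of the data volume for VMs/CTs; as I understand it, 0 skips creating it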
That's a good point.
This is a question I wanted to ask too... Is it possible to completely disable that LVM group for VMs at install time?
It is useless to me since I'm using Ceph and other shared storage solutions, so I'd like to remove it completely and reclaim that space for some other...
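If it cannot be disabled at install time, the post-install cleanup I have in mind would be something like this (just a sketch; local-lvm and the pve volume group are the default names on my nodes, double-check yours before running anything):

pvesm remove local-lvm                  # drop the storage definition from the PVE config
lvremove /dev/pve/data                  # remove the data thin pool itself
lvresize -l +100%FREE /dev/pve/root     # give the freed space to the root LV
resize2fs /dev/pve/root                 # grow the ext4 filesystem to match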
Damn... you're right!
I was using an 8 GB drive (that was the default value from VirtualBox and I thought it would be enough for a base install).
Using a 12 GB hard drive fixed the issue.
Thanks for your help.
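For the record, the virtual disk can also be grown from the host instead of being recreated; with VBoxManage it is a one-liner (file name here is just an example, and as far as I know this only works for dynamically allocated VDI images):

VBoxManage modifymedium disk pve.vdi --resize 12288   # new size in MB (12 GB)

The guest still has to be reinstalled (or its partitions grown) to actually use the extra space.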
Hi,
I just wanted to quickly set up a small lab within VirtualBox to test some ProxMox setups, but the installation keeps failing whatever settings I try.
I don't want to do anything serious with this setup; it is just for test and demo purposes.
Here is my setup:
* Linux Ubuntu 19.10...
Nope, but this bug happens either after cloning a live VM (with more than 100 GB of data to clone for some of them) or after a failed OSD (not always, though).
I don't have logs; the clone itself works without any issue, the VM is cloned properly and you can start it, but if you then want to restart it, you can't and you get a sysfs write error.
I found many reports on the Ceph ML when searching for these errors on Google... it seems not so uncommon...
The clone.
I think the problem mainly occurs because we clone a "living" image with moving blocks, and not a snapshot or a "cold" image.
One solution would be to create an automatic temporary snapshot before the clone, do the clone based on the snapshot, then remove the temporary snapshot right...
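Done by hand with the rbd CLI, the idea would look roughly like this (pool and image names are made up, and this does a full copy from the snapshot rather than whatever PVE does internally):

rbd snap create rbd/vm-100-disk-1@clone-tmp             # freeze a consistent view of the live image
rbd cp rbd/vm-100-disk-1@clone-tmp rbd/vm-101-disk-1    # copy from the snapshot instead of the moving image
rbd snap rm rbd/vm-100-disk-1@clone-tmp                 # drop the temporary snapshot afterwards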
Hi,
I think I found a corner case.
We have VMs running on Ceph in KRBD mode.
I have a user who cloned such a VM while it was live, neither from a stopped VM nor from a snapshot.
I don't know why, but it works, except that when you want to remove the VM and it tries to remove the image, I get a sysfs...
+1... any idea on how to fix or mitigate this issue?
It also happens in case of OSD loss, even though the I/O pressure is much less intense with Luminous.
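Not a real fix, but a possible mitigation sketch, assuming the sysfs error comes from a leftover krbd mapping on one of the nodes (device and image names below are made up):

rbd showmapped             # find on which node/device the image is still mapped
rbd unmap /dev/rbd0        # release the kernel mapping on that node
rbd rm rbd/vm-101-disk-1   # the image should then be removable normally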
Hi,
I had many KVM VMs running on Ceph with very good performance, but I decided to give LXC on Ceph a try in order to benefit from near bare-metal performance and a lighter overhead on resources, and I was soon disappointed on the disk I/O side.
While in KVM (virtio-scsi + writeback cache mode)...
Thanks, but I know this roadmap, and it is more about the upcoming release than a real roadmap of longer-term objectives... and for now there is nothing about multi-datacenter management in this document.
For now I would recommend Intel... Ryzen is still pretty fresh and only well supported on very recent kernel releases such as 4.12, while ProxMox is based on 4.4, which is a bit old for Ryzen.
That might change in the near future, but right now I would stick with Intel.
Well, be careful with consumer SSDs... I had big trouble with Corsair LS ones (60 GB for less than €30 each... I thought they would make great rootfs disks, but they died after a few months even though, apart from logs, they were not very busy...).
On the other hand, I use Samsung 850 EVOs (not...
Hi,
Been using ProxMox for years and it is getting better and better, but there are still no multi-datacenter capabilities, and even though I would love to stick with ProxMox, I'm now in a situation where our infrastructure is getting bigger and bigger and too complex for one big ProxMox cluster...
Well, I would personally do this:
PNY SSD for the system installation (I usually give 12 GB to my ProxMox nodes with ext4; it's far enough unless you want to install stuff on the hypervisor itself, which I wouldn't recommend). You could use the extra space for whatever you want.
Having no raid for the...