Because those servers were already bought and they draw 150 W per blade (about 2.2 kW for the whole chassis) while offering 512 GB of RAM per blade (7.2 TB of RAM in total) and 2x 24-core CPUs at 3.2 GHz per blade, for a total of 672 cores. Energy efficiency is really good, and we also have two NFS servers outside for backups...
Aaron, in this scenario:
A Huawei E9000 with 7 CH121 blades (2 disks per blade: one for the Proxmox OS + one for a Ceph OSD), what would be appropriate sizing / minimum settings?
The power supply and switching are centralized in this type of system (6 PSUs + 2 switches), so I suppose 1 node failure is to...
Unfortunately this never got an answer, but I'll tell you my case today.
We just bought a bunch of Huawei E9000 chassis that look equivalent to this setup. We have 14 blades per chassis with the same 2.5" disk setup.
One possibility is to use NVMe drives, as this documentation here allegedly...
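For reference, the minimal per-blade Ceph setup I have in mind is roughly the sketch below; the disk device (/dev/sdb), the network and the pool name are just assumptions for my hardware, not something taken from the documentation.

```
# On every blade (Proxmox node): install the Ceph packages
pveceph install

# On the first node: initialize Ceph on the storage network
pveceph init --network 192.168.0.0/24

# On at least 3 of the blades: create monitors so there is quorum
pveceph mon create

# On every blade: turn the second 2.5" disk into an OSD
pveceph osd create /dev/sdb

# One replicated pool (size 3 / min_size 2) for the VM/CT disks
pveceph pool create vm-pool --size 3 --min_size 2
```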
Yes, the same for the entire multi-blade E9000 for high density. This is really a must for the next version; all the big data centers are now migrating from Dell and HP to Huawei and IBM because of the cost, and because shipping from China is the best.
CH121
CH222
And all the blades compatible with the...
This thread needs an UP, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also...
It's funny that the question was about LXC resize and not VM resize; also, if the disk is a Ceph one the procedure is one thing, and if it sits inside local-lvm it is something completely different.
The lazy one ends up working twice.
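To make the difference concrete, here is a rough sketch (the container ID 101, the +10G and the volume names are just examples, not something from this thread):

```
# Let Proxmox handle the storage layer for you, whatever it is:
pct resize 101 rootfs +10G

# What happens underneath depends on where the volume lives:
#  - Ceph RBD:   rbd resize <pool>/vm-101-disk-0 --size <new size>
#  - local-lvm:  lvextend -L +10G /dev/pve/vm-101-disk-0
# followed in both cases by growing the filesystem inside the container
# (e.g. resize2fs for ext4).
```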
So this is the 5th topic I have found in the Proxmox forum related to the problem of booting a non-Linux ISO in a VM.
There is also this: https://forum.proxmox.com/threads/unable-to-boot-ovmf-vm-on-proxmox-6.60424/
this...
@Fabian_E sorry buddy, I just forgot it, sorry. I really like seeing the improvements being made in Proxmox; I have been using it for years now, and at large scale since 2016. Thanks for the effort. https://www.proxmox.com/en/proxmox-ve/testimonials/item/kimenz-equipamentos
@tom I have the same issue on Proxmox 7.1.5 on machines whose disks are on Ceph. Normally when the disks are larger than 200 GB I have the same problem; I just realized that. Can someone tell me why the backup job finishes with errors although the image is made?
Very good idea to show the PVE vzdump progress in the GUI.
I think something as simple as this would just be a matter of piping the data to a subsystem that can show it on the sender side. The backup side really doesn't matter, because if this feature is implemented on the sender side it would always work...
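As a sketch of what I mean (the VMID, the storage name and the exact wording of the log lines are assumptions on my side), running vzdump from a shell already prints progress information that could be surfaced:

```
# Run a backup by hand and keep only the progress lines vzdump prints
# (their exact format varies a bit between PVE versions):
vzdump 101 --storage backup-nfs --mode snapshot 2>&1 \
  | grep --line-buffered -E 'INFO: +(status|transferred|[0-9]+%)'
```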
Sorry for my words, but dealing with healthcare IT infrastructure during the Covid pandemic, and having to handle a Ceph network change on top of it, has been very stressful for me these last months in Brazil. I didn't want to offend anyone.
I'm trying to change the Ceph network to work over a 10 Gbps switch...
I already checked the entire documentation and I'm not able to find a way to migrate my entire Ceph architecture to the new 10 Gbps NICs.
Also, those NICs are running on a different network, 10.0.0.0, and not on 192.168.0.0 like before.
Can you please share with me the way, or any link that could...
Wow, this is so bad; Proxmox should add these features in the next versions. Not being able to reconfigure Ceph in an easy way, making it a pain in the ass to open up a 12-node setup just to change its monitor IP addresses, is horrible.
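For anyone landing here with the same problem, the rough sequence I pieced together is below; the subnet and the node name are placeholders from my own setup, and the monitors have to be redone one node at a time so the cluster never loses quorum.

```
# 1. Point Ceph at the new network in /etc/pve/ceph.conf on all nodes:
#      public_network  = 10.0.0.0/24
#      cluster_network = 10.0.0.0/24

# 2. Recreate each monitor (one node at a time!) so it binds to a 10.0.0.x address
pveceph mon destroy <nodename>
pveceph mon create

# 3. Restart managers and OSDs so they re-register on the new network
systemctl restart ceph-mgr.target ceph-osd.target
```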
In my case it's the same: I had a disk failure on one of my nodes. Unfortunately, in this install I didn't have a RAID controller, so I had to reinstall Proxmox on another disk. (I still have my Ceph disks untouched.)
I need to re-add this node to the cluster (now I presume it has a new...
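What I'm planning to try, as far as I understand the docs (the node name and IP below are placeholders, and I'd appreciate confirmation before I run it):

```
# On a healthy cluster member: remove the stale entry for the reinstalled node
pvecm delnode pve-node3

# On the freshly reinstalled node: join the cluster again via a healthy node's IP
pvecm add 192.168.0.10

# Then bring the untouched Ceph OSDs on this node back online
ceph-volume lvm activate --all
```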