Search results

  1. Ceph Hyperconverged on Blade servers

    Yes Alex, I also have 2 CH222 with 14 drives each. I could spin up the Ceph server there? Yes, the consumption is just idle.
  2. Ceph Hyperconverged on Blade servers

    Because those servers were already bought, and they offer 150 W consumption per blade (2.2 kW total per chassis) for spinning up 512 GB RAM per blade (7.2 TB RAM total), and come with 2x 24-core CPUs per blade, for a total of 672 cores at 3.2 GHz. Energy efficiency is really good, and we also have two NFS servers outside for backups...
  3. Ceph Hyperconverged on Blade servers

    Aaron, in this scenario: a Huawei E9000 with 7 CH121 blades (2 disks per blade: one for the Proxmox OS + one for a Ceph OSD), which would be an appropriate size / minimum settings? The power supply and switching are centralized for this type of system (6 PSUs + 2 switches). I suppose 1 node failure is to...
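For a 7-node setup like the one asked about above, a common starting point (not stated in the thread, just the usual Ceph replication defaults) is a replicated pool with size 3 and min_size 2, which keeps data on three nodes and stays writable through a single node failure. A minimal sketch using the Proxmox `pveceph` tooling; the pool name is a placeholder:

```shell
# Sketch only: assumes a working Proxmox VE cluster with Ceph set up.
# Pool name "vm-pool" is a placeholder, not from the thread.
# size 3 / min_size 2 = three replicas, writable with one node down.
pveceph pool create vm-pool --size 3 --min_size 2
```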
  4. Ceph Hyperconverged on Blade servers

    Unfortunately this didn't get any answer, but I'll tell you my case today. We just bought a bunch of Huawei E9000s that look equivalent to this setup. We have 14 blades per chassis with the same 2.5 in disk setup. One possibility is to use NVMe drives, as here in this documentation it is allegedly...
  5. Proxmox 7.1.2 installation fail on Huawei RH2288H V3

    Yes, the same for the entire multi-blade E9000 for high density. This is really a must for the next version; all big data centers are now migrating from Dell and HP to Huawei and IBM because the cost and shipping from China are the best. CH121, CH222 and all the blades compatible with the...
  6. Is anyone here actually running kubernetes on their LXC containers?

    This thread needs an up, because it really doesn't make sense that it's 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes. Also, having Proxmox and Ceph is a must, not only for the reliability but also...
  7. [SOLVED] How to remove Datastore (Directory)

    What an effective answer, thanks. At least...
  8. Resize LXC DISK on Proxmox

    It's funny that the question was about LXC resize and not VM resize; also, if the disk is a Ceph one the procedure is one thing, and if it is inside local-lvm it is something completely different. The lazy one works twice.
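For reference, Proxmox abstracts the storage backend for this case: `pct resize` grows a container volume and its filesystem in one step, whether the volume lives on Ceph RBD or on local LVM. A minimal sketch; the container ID and size are placeholders:

```shell
# Grow the root filesystem of container 101 by 8 GiB (placeholders).
# Works for both RBD- and LVM-backed volumes; shrinking is not supported.
pct resize 101 rootfs +8G
```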
  9. VM Fails to start: "Failed to start Boot0001 "UEFI QEMU DVD-ROM"" - GPU Passthrough VM Config Setup

    So this is the 5th topic I have found on the Proxmox forum related to the problem of booting non-Linux ISOs in a VM. There is also this: this: this...
  10. Backup job finished with errors

    @Fabian_E sorry buddy, I just forgot it, sorry. I really like seeing the improvements being done in Proxmox; I have been using it for years, and at large scale since 2016. Thanks for the effort.
  11. Backup job finished with errors

    @tom I have the same issue on Proxmox 7.1.5 on machines whose disks are on Ceph. Normally when the disks are larger than 200 GB I have the same problem; I just realized that. Can someone tell me why the backup job finished with errors although the image is made?
  12. Backup progress percent or bar or else?

    Very, very good idea to show the PVE vzdump progress in the GUI. I think it is a simple thing, as this would just mean piping the data to a subsystem that can show it on the sender side. The backup side really doesn't matter, as if this feature is implemented on the sender side it would always work...
  13. Change ceph network

    Sorry for my words, but dealing with healthcare I.T. infrastructure during the Covid pandemic and having to handle a Ceph network change has been very stressful for me these last months in Brazil. I didn't want to offend anyone. I'm trying to change the Ceph network to work under a 10 Gbps switch...
  14. Change ceph network

    I already checked the entire documentation and I'm not able to find a way to migrate my entire Ceph architecture to the new 10 Gbps NICs. Also, those NICs are running on a different network, not on the same one as before. Can you please share with me the way, or any link that could...
  15. Change ceph network

    Wow, this is so bad; Proxmox should add these features in the next versions. Not being able to reconfigure Ceph in an easy way, making it a pain to open up a 12-node setup to change its monitor IP addresses, is horrible.
  16. Problem with consistent high io load (~12%) on node in CEPH cluster

    I have PVE 6.3.2 and I'm still having this issue, same settings as the user above. Also NFS related.
  17. Node with question mark

    2.4.4, I have the same problem.
  18. Does you know how to fix this problem ?

    In my case it is the same: I had a disk failure on one of my nodes. Unfortunately, in this install I didn't have a RAID controller. So I had to reinstall Proxmox on another disk (I still have my Ceph disks untouched). I need to re-add this node to the cluster (now I presume it has a new...
  19. Nodes unable to maintain communication causing Ceph to fail

    Please, can you post what the shell on the isolated node says?
  20. "kvm Tainted"

    Did you solve the problem?
