Search results

  1. Ceph and a Datacenter failure

    I think scheduled replication would be ideal; that way the data centers stay independent of each other and you don't lose everything in one shot.
  2. How can I change the num-replicas on ceph pool online? Need to go from 6/3 to 3/2

    Ok, I stand corrected. Still, I would make backups first.
  3. How can I change the num-replicas on ceph pool online? Need to go from 6/3 to 3/2

    You can always increase the numbers but not decrease them, so the only way to fix this is to create a new storage pool and migrate the VMs and containers to it. Then destroy the old pool to reclaim the disk space. As always, make a backup of your VMs and containers on a separate disk first before doing...
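A rough sketch of the new-pool-then-migrate approach described in that thread. The pool name, storage ID, VM ID, and disk name below are hypothetical examples, not taken from the post, and with `DRY_RUN=1` (the default) the commands are only printed, not executed:

```shell
# Hypothetical sketch: replace a 6/3 pool with a 3/2 pool and migrate a VM disk.
# All names (newpool, rbd-new, oldpool, VM 100, scsi0) are made-up examples.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run pveceph pool create newpool --size 3 --min_size 2  # pool with the desired 3/2 replicas
run pvesm add rbd rbd-new --pool newpool               # expose the pool as PVE storage
run qm move-disk 100 scsi0 rbd-new --delete            # move a VM disk, dropping the source
run pveceph pool destroy oldpool                       # reclaim space once everything moved
```

Set `DRY_RUN=0` only after backing up, as the thread advises.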
  4. Proper Maintenance of a Node

    I just noticed in ceph.conf that the pve7 server was missing from the list of 4 monitors, even though I do see it as active in the WebGUI. I don't know if adding this would have made a difference. [mon.pve7] host = pve7 mon_addr = 10.50.10.242:6789,10.40.10.242:6789
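Laid out as a proper ceph.conf stanza, the monitor entry quoted in that post would read as follows (whether adding it would have helped is, as the poster says, uncertain):

```ini
[mon.pve7]
    host = pve7
    mon_addr = 10.50.10.242:6789,10.40.10.242:6789
```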
  5. Proper Maintenance of a Node

    I had a scare with my cluster when I was doing firmware updates on one of my Dell PowerEdge R7415 servers, which took 30 mins to complete. I had expected the Ceph cluster to recover on its own, but it caused some VMs to stall and two nodes became inaccessible. Once the downed node was back up...
  6. Proxmox 6 mpt3sas Debian Bug ICO Report 926202 with Loaded Controller

    I am having the same issue. I have 4 nodes running PVE 6.0-7, and once in a while the server would freeze and then reboot on its own. Looking through the log I see the same errors as you posted above. It's a Dell PowerEdge R7415 with non-raid Dell HBA330 Mini (Embedded) Firmware 16.17.00.03...
  7. Cloud-init on ubuntu NOT using ubuntu cloud images.

    Another thing you may want to do is remove the machine-id in /etc/machine-id so it'll be unique in each clone generation. cat /etc/machine-id rm /etc/machine-id touch /etc/machine-id I don't know if cloud-init does that automatically, but at least doing this wouldn't hurt before converting the image...
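The remove-and-recreate sequence from that post can be wrapped in a small helper. The function name is a made-up convenience, and the path argument exists only so it can be tried on a scratch file instead of the real /etc/machine-id:

```shell
# Reset a machine-id file so a cloned VM regenerates a unique id on first boot.
# Defaults to /etc/machine-id; pass another path to experiment safely.
reset_machine_id() {
    id_file="${1:-/etc/machine-id}"
    rm -f "$id_file"     # drop the id inherited from the template
    : > "$id_file"       # recreate it empty; systemd repopulates it at boot
}
```

Run it inside the template (or via `virt-sysprep`) before cloning, per the post's suggestion.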
  8. Our server crashed in production while live migrating.

    Whenever I upgrade the PVE nodes, I manually move the live VMs onto another node and then upgrade the empty node. I know some just upgrade and reboot to let HA handle the migrations, but I only let HA handle it if the node actually failed unexpectedly. This way I make sure the migrations are...
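The manual drain described there comes down to one `qm migrate` per VM. The VM IDs and target node below are hypothetical; `echo` keeps this a printed plan rather than a live migration (in practice you would list IDs with `qm list` and drop the `echo`):

```shell
# Hypothetical drain loop: live-migrate each VM off the node before upgrading it.
for vmid in 100 101 102; do
    echo qm migrate "$vmid" pve2 --online   # drop echo to actually migrate
done
```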
  9. Epyc Zen 2 with Proxmox 5.4-13

    I can tell you both PVE 5.4 and 6.0-7 are running great with CPU(s) 64 x AMD EPYC 7551P 32-Core Processor (1 Socket). A couple of my servers have 512GB of RAM, with smaller amounts in the others, 4 nodes total in the cluster.
  10. Proxmox VE 6.0 released!

    Figured out the problems, and it was mostly my fault for not paying close attention to certain issues after the Proxmox upgrade from version 5.4 to 6. My test environment upgraded without issues, but it didn't have the 10 gig network cards that are used for the Ceph cluster. I...
  11. Proxmox VE 6.0 released!

    I've upgraded my test environment, which worked without issues. Then I upgraded the production environment and am getting this error: auth: unable to find a keyring on /etc/pve/priv/ceph.mon.pve4.keyring: (13) Permission denied, and the ceph daemon isn't starting. It's happening on all of my 3 nodes...
