Search results

  1.

    ceph not working monitors and managers lost

    current config:

    [global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 10.0.1.1/16
        fsid = cfa7f7e5-64a7-48dd-bd77-466ff1e77bbb
        mon_allow_pool_delete = true
        mon_host = 10.0.1.2 10.0.1.1 10.0.1.6...
  2.

    ceph not working monitors and managers lost

    @Tmanok I didn't mean to sound unappreciative! I do need the help... so there is that. I have 9 nodes (8 are active; node 6 is down while I put in a new power supply, so it is out of the cluster for now). They talk to each other fine... The nodes (Stack1-node8) all have a single 1TB HDD spinner that I...
  3.

    Old server cluster - 6 x 1gb nic - best way to config?

    OK, but assuming I don't want to redo everything and reinstall from scratch... just wondering where to change the config, or how to change it after it has already been on another subnet... or how to switch it to a different NIC within the config... oh well, I will look some more
  4.

    Old server cluster - 6 x 1gb nic - best way to config?

    How do I define corosync and Ceph affinity to a specific NIC?
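    A minimal sketch of what that NIC affinity usually looks like, assuming a dedicated subnet (10.0.2.0/24 here, purely illustrative) on the NIC you want to dedicate: corosync binds to whatever address each node's ring0_addr resolves to, and Ceph binds to the subnets named by public_network / cluster_network in ceph.conf.

      # /etc/pve/corosync.conf - per-node ring address on the dedicated subnet
      nodelist {
        node {
          name: node2
          nodeid: 2
          quorum_votes: 1
          ring0_addr: 10.0.2.2
        }
        # ... one block per node ...
      }

      # /etc/pve/ceph.conf - point Ceph at the same (or another) dedicated subnet
      [global]
          public_network = 10.0.2.0/24
          cluster_network = 10.0.2.0/24

    When changing ring0_addr on an existing cluster, edit /etc/pve/corosync.conf and bump config_version in the totem section so the change propagates to all nodes; Ceph daemons also need a restart to pick up new network settings.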
  5.

    ceph not working monitors and managers lost

    #1 - I have 9 nodes. #2 - all nodes have plenty of resources. #4 - of course I rebooted all nodes... this happened after the Octopus to Pacific update with the automatic update and upgrade script... Aside from generic info, your answer has no value for my case as posted... ceph just hangs, and mds and osd...
  6.

    Ceph down after upgrade to Pacific

    Any way to recover the OSDs, get the managers back, and rescue the map? Nodes can see each other fine - just missing managers for Ceph and no OSDs are showing up. ceph -s hangs, timeouts on any GUI screen and on most ceph commands. root@node900:/etc/pve/nodes/node2# ha-manager status: quorum OK, master node5...
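    A quick triage sketch for the "ceph -s hangs, managers missing, no OSDs" symptom, assuming a stock Proxmox-managed Ceph install (unit names follow the hostname, which is how pveceph creates them): check whether the mon/mgr daemons are running at all before touching maps or OSDs, and give the CLI a client-side timeout so it fails fast instead of hanging.

      # Are the monitor and manager daemons even running on this node?
      systemctl status ceph-mon@$(hostname)
      systemctl status ceph-mgr@$(hostname)

      # Fail fast instead of hanging forever (5-second client timeout)
      ceph --connect-timeout=5 -s

      # If the mon is running but not forming quorum, its own log says why
      journalctl -u ceph-mon@$(hostname) --since "1 hour ago"

    If no monitor is up anywhere, ceph -s cannot answer by design - getting at least one mon back (or rebuilding the mon store) has to come before any OSD or CRUSH map work.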
  7.

    ceph not working monitors and managers lost

    Did you ever resolve this? I am having the same issue. ceph -s just sits there and freezes, timeout (500) on the GUI for the Ceph status page/dashboard. The config shows all the correct hosts for monitors and the correct node IPs. Proxmox node-to-node connectivity is fine - just the Ceph MANAGERS are missing and no OSDs...
  8.

    Ceph Recovery after all monitors are lost?

    I somehow lost all my OSDs and the map too - when I did the PVE GUI update... after reboot everything went to hell... any ideas on any of this? ceph osd setcrushmap -i backup-crushmap and just about any ceph command just hangs and/or times out... Monitors are listed but there is no quorum. No OSDs are listed...
  9.

    Ceph down after upgrade to Pacific

    To be honest - I did not even look to see what upgrades happened until it was too late. The Octopus to Pacific upgrade apparently happened with the automatic GUI updates... I did not read the notes and now my cluster's whole Ceph pool is dead as a rock. I noticed timeout after timeout... I manually...
  10.

    Old server cluster - 6 x 1gb nic - best way to config?

    I have a bunch of older servers - almost all have a 4-port 1Gb card and 2x onboard 1Gb ports. Right now I am only using one of the onboard NICs on each node... I have a Linux bridge (vmbr0) assigned to that onboard port, and all the VMs and LXCs run over that...
  11.

    Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1804.

    Hey, you're not alone... I've been having the same issue with pve-root maxing out... seems something is getting stuck on the recent upgrade... For me the Ceph log and other logs were HUGE and taking up all the space... so I deleted the Ceph log and removed the OSD from that specific node... completed the update...
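    A short sketch for finding and reclaiming that space without removing OSDs, assuming the usual culprits are /var/log/ceph and the systemd journal (the size caps below are arbitrary):

      # What is actually eating the root filesystem?
      du -xh / --max-depth=2 2>/dev/null | sort -h | tail -20

      # Shrink runaway Ceph logs in place instead of deleting the files
      truncate -s 0 /var/log/ceph/*.log

      # Cap the systemd journal
      journalctl --vacuum-size=200M

    Truncating (rather than deleting) matters because a daemon that still holds a deleted log file open keeps the space allocated until it restarts.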
  12.

    node root pve-root 100% full - delete log - autoremove now need to extend root

    So this caused all sorts of issues: the node would not update, then it froze up badly... restarted and it would not reconnect... got on the local console and saw it was up but out of root partition space... SSH'd to the node, apt-get autoremove failed... no space, everything I did failed... I found several...
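    For the "now need to extend root" part, a minimal sketch assuming the stock Proxmox LVM layout (volume group pve, root LV pve/root on ext4) and that the volume group still has unallocated space - check first, the +10G below is illustrative:

      # How much free space does the volume group have?
      vgs pve

      # Grow the root logical volume
      lvextend -L +10G /dev/pve/root

      # Grow the ext4 filesystem to fill the enlarged LV (works online)
      resize2fs /dev/pve/root

    On a default install most of the VG is already handed to the pve/data thin pool, so vgs may show little free space; in that case cleaning up logs (previous result) is the realistic fix.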
  13.

    VM HDD read write speed about 25% less than direct on node speed

    Yes - I put that in the info above... Using the entire 1TB host HDD as an OSD for CephPool1 (9 other nodes with 1TB drives as Ceph OSDs, and 1 machine with 8 more drives all set up as OSDs)... Created the VM using CephPool1 for its HDD - VirtIO SCSI, default, no cache on the VM HDD setup. Wondering...
  14.

    VM HDD read write speed about 25% less than direct on node speed

    Testing further - I changed the guest HDD to "emulate SSD" and it seems to have increased performance a bit... from 7900-8200 to 8200-8650 MB/sec, a 3 to 5% improvement, but not anywhere close to direct access on the Proxmox node at around 11,500... So about 25% less disk performance in the VM than...
  15.

    VM HDD read write speed about 25% less than direct on node speed

    Was looking today and noticed significantly slower speeds in the Ubuntu guest VM for hdparm -Tt /dev/sda2 than for the same partition on the host node directly on the console. This is directly on the Proxmox node against the attached SSD /dev/sda2. This is directly on the Proxmox node against the attached SATA HDD...
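    hdparm -T mostly measures cache throughput, so it is a blunt tool for host-versus-guest comparisons; a sketch of a more controlled test with fio, run identically on the node and inside the guest (file path, size and runtime are arbitrary choices):

      # Sequential read with direct I/O, so the page cache does not flatter the result
      fio --name=seqread --filename=/root/fio.test --size=2G \
          --rw=read --bs=1M --direct=1 --ioengine=libaio \
          --runtime=30 --time_based --group_reporting

      # Random 4k 70/30 read/write mix, closer to a typical VM workload
      fio --name=randrw --filename=/root/fio.test --size=2G \
          --rw=randrw --rwmixread=70 --bs=4k --direct=1 --ioengine=libaio \
          --iodepth=16 --runtime=30 --time_based --group_reporting

    Delete /root/fio.test afterwards; on a Ceph-backed VM disk it is usually the random-I/O numbers, not the sequential ones, that show the real gap.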
  16.

    VM to Template - how to share with others

    How does turnkeylinux package theirs and distribute it via the integrated download tool in the GUI? I want to make it a simple, stupid-easy install...
  17.

    VM to Template - how to share with others

    I spent the time to build a generic VM for a specific app using an Ubuntu 21 ISO. After hours of base setup I shut down the VM and hit "Convert to template", thinking I could then zip the template up and share it with others so it could save them 5 hours of basic setup... How do we do this? I...
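    Since a template is just the VM config plus its disks on your storage, the practical way to hand it to someone else is a backup archive; a minimal sketch assuming VMID 9000 and default local storage paths (both illustrative):

      # On your cluster: dump the stopped VM/template to one compressed archive
      vzdump 9000 --mode stop --compress zstd --dumpdir /var/lib/vz/dump

      # Share the resulting vzdump-qemu-9000-*.vma.zst; the recipient restores it as a new VMID
      qmrestore /var/lib/vz/dump/vzdump-qemu-9000-<timestamp>.vma.zst 9001

      # Optionally mark the restored VM as a template again
      qm template 9001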
  18.

    8 node cluster - 4 working and see each other but lost quorum

    So what I did to get it running again: after rebooting all machines - no quorum, so all machines refused to start up VMs. I realized that the expected votes equal the total number of machines joined to the cluster. Quorum is apparently defined as more than 50% of the expected votes. So - I am assuming a lot here from what I am...
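    That matches how corosync counts votes: with 8 joined nodes the cluster expects 8 votes and needs a strict majority (5), so 4 reachable nodes is exactly not enough. A sketch of the usual stopgap while the other nodes are down (use with care - it weakens split-brain protection):

      # Show current vote and quorum state
      pvecm status

      # Temporarily tell the cluster to expect only 4 votes, so the 4 live nodes become quorate
      pvecm expected 4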
  19.

    8 node cluster - 4 working and see each other but lost quorum

    logging {
      debug: off
      to_syslog: yes
    }

    nodelist {
      node {
        name: node2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.0.1.2
      }
      node {
        name: node3
        nodeid: 3
        quorum_votes: 1
        ring0_addr: 10.0.1.3
      }
      node {
        name: node4
        nodeid: 4
        quorum_votes: 1...
