Search results

  1.

    VM HDD read write speed about 25% less than direct on node speed

    yes - I put that in the info above... Using the entire host 1TB HDD as the OSD for CephPool1 (9 other nodes with 1TB drives as Ceph OSDs, and 1 machine with 8 more drives all set up as OSDs)... Created the VM using CephPool1 for the HDD - VirtIO SCSI - default, no cache on the VM HDD setup. wondering...
  2.

    VM HDD read write speed about 25% less than direct on node speed

    Testing further - changed the guest HDD to "emulate SSD" and it seems to have increased performance a bit... from 7900-8200 to now 8200-8650 MB/sec is a 3 to 5% improvement, but not anywhere close to direct access on the PM node at around 11,500... So about 25% less disk performance on the VM than...
  3.

    VM HDD read write speed about 25% less than direct on node speed

    Was looking today and noticed significantly slower speeds on the Ubuntu guest VM for hdparm -Tt /dev/sda2 than for that same partition on the host node directly on the console. This is directly on the Proxmox node to the attached SSD /sda2. This is directly on the Proxmox node to the attached SATA HDD...
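
    (For context, a quick way to reproduce that comparison is to run the same hdparm benchmark on the host and again inside the guest against the same backing storage - the device names below are placeholders, not the poster's exact ones:)

        # on the Proxmox host, against the physical partition
        hdparm -Tt /dev/sda2

        # inside the Ubuntu guest, against the virtual disk backed by that storage
        hdparm -Tt /dev/sda2
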
  4.

    VM to Template - how to share with others

    how does turnkeylinux package theirs and distribute it via the integrated download tool in the GUI? I want to make it a simple, stupid-easy install...
  5.

    VM to Template - how to share with others

    I spent the time to build a generic VM for a specific app using an Ubuntu 21 ISO. After hours of base setup I shut down the VM and used "Convert to template", thinking I could then zip the template package up and share it with others so it could save them 5 hours of basic setup... How do we do this? I...
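
    (One hedged way to do this, assuming the goal is simply to hand someone a ready-made VM: back it up with vzdump and share the archive, which the recipient restores with qmrestore. VM ID 100, the target ID 200, and the storage name are placeholders:)

        # on the source node: make a compressed backup of VM 100
        vzdump 100 --mode stop --compress zstd --dumpdir /tmp

        # on the recipient's node: restore the shared archive as a new VM
        qmrestore /tmp/vzdump-qemu-100-*.vma.zst 200 --storage local-lvm
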
  6.

    8 node cluster - 4 working and see each other but lost quorum

    So what I did to get it running again: after rebooting all machines - no quorum, so all machines refused to start up VMs. I realized that the expected votes is the total number of joined machines in the cluster. Quorum is apparently defined as more than 50%. So - I am assuming a lot here from what I am...
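
    (The arithmetic behind that: with 9 expected votes, quorum is floor(9/2) + 1 = 5, so 4 reachable nodes cannot reach quorum. The usual temporary workaround is to lower the expected votes to the number of reachable nodes - the count here just follows the thread's example:)

        # temporarily expect only the 4 reachable votes so the cluster becomes quorate
        pvecm expected 4

        # confirm quorum state afterwards
        pvecm status
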
  7.

    8 node cluster - 4 working and see each other but lost quorum

    logging {
      debug: off
      to_syslog: yes
    }
    nodelist {
      node {
        name: node2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.0.1.2
      }
      node {
        name: node3
        nodeid: 3
        quorum_votes: 1
        ring0_addr: 10.0.1.3
      }
      node {
        name: node4
        nodeid: 4
        quorum_votes: 1...
  8.

    8 node cluster - 4 working and see each other but lost quorum

    sorry 9th node was never joined.. lol that makes sense...
  9.

    8 node cluster - 4 working and see each other but lost quorum

    sorry for the multiple-reply question - it would not let me post all that info.. too many characters... seems you all ask for that info when helping to troubleshoot anyhow...
  10.

    8 node cluster - 4 working and see each other but lost quorum

    So I checked corosync-quorumtool:

    root@stack1:~# pvecm expected 3
    Unable to set expected votes: CS_ERR_INVALID_PARAM
    root@stack1:~# corosync-quorumtool
    Quorum information
    ------------------
    Date:             Sun Jul 11 23:53:40 2021
    Quorum provider:  corosync_votequorum
    Nodes:            4
    Node...
  11.

    8 node cluster - 4 working and see each other but lost quorum

    anyhow...

    root@stack1:~# systemctl status corosync
    ● corosync.service - Corosync Cluster Engine
       Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2021-07-11 23:35:16 CDT; 13min ago
         Docs: man:corosync...
  12.

    8 node cluster - 4 working and see each other but lost quorum

    Not sure what happened when I updated things a while back, but I lost 5 of the 9 nodes... nothing special on any of them.. I managed to get the VMs live again from replicated data on other nodes and they are all working... I let it be for a couple of months and have not looked at it since...
  13.

    CEPH on node upgrade issue

    my goal is to get rid of the DD-WRT and replace it with a 2-node failover router setup of some sort.. I have all those Dell R210 II servers with 6 NICs; I was hoping to go bare-metal pfSense/OPNsense or some firewall-like option, or something with better failover and threat detection like ClearOS... who...
  14.

    CEPH on node upgrade issue

    Yeah - I can get the bridge to work for private-net IP ranges... but have not figured out how to pass the public IP address through to the VM... I have: ISP fiber modem - switch - DD-WRT - switch - cluster nodes. The ISP modem is in passthrough (but also serves its own directly connected 192.168.1.x clients). The DD-WRT WAN is...
  15.

    CEPH on node upgrade issue

    So I guess my question is: do I configure eno1 with the node IP and then vmbr with the IP the virtual machine requests? What I want is each node at 10.0.1.1 through 10.0.1.8 and each VM at 10.0.1.101 through XXX. I lose connection to the node if I set eno1 to a specific IP like 10.0.1.5 for node 5, but...
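
    (A minimal /etc/network/interfaces sketch for that layout, as an assumption-laden example: the node IP sits on the bridge vmbr0, eno1 carries no address, and VMs attach to vmbr0 and take 10.0.1.101+ inside the guest. The address, gateway, and NIC name are placeholders:)

        auto lo
        iface lo inet loopback

        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 10.0.1.5/24
            gateway 10.0.1.254
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
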
  16.

    CEPH on node upgrade issue

    so how do I do that with the above? any setup you recommend?
  17.

    CEPH on node upgrade issue

    Any suggestions for setup? Dell R210 II server, 2 onboard NICs, 4 NICs on a card (see above)... I have had a fit trying to figure out the best setup to stack these and use Ceph, or a management bond over a pair to speed up data, or separate the management from the public/behind-router private LAN...
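
    (One possible split, purely as a sketch: bond two of the add-on NICs for a dedicated Ceph/storage network and keep management plus VM traffic on vmbr0. The NIC names, the 802.3ad mode - which needs LACP support on the switch - and the address are assumptions:)

        auto bond0
        iface bond0 inet static
            address 10.0.10.5/24
            bond-slaves eno3 eno4
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
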
  18.

    CEPH on node upgrade issue

    Since the update, one of my nodes has become a bit unstable.. well, 3 of my nodes - but they have different issues, some dealing with network problems... For node 5 - I have a Ceph monitor that is showing as "undefined" in the list of monitors on any node I connect to the GUI with. root@node5:~#...
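
    (If that monitor really is gone rather than just unreachable, one recovery path - version-dependent, so treat it as a sketch - is to remove the stale monitor entry and recreate it on that node:)

        # on node5: drop the stale monitor and create a fresh one
        pveceph mon destroy node5
        pveceph mon create
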
  19.

    new server to existing cluster

    Hi Tom, sorry to hijack this years later - do you have a simple walkthrough for this? I am new to PM and had similar issues with a node after updating to ifupdown2 and changing the node IP, I believe. Broken connection to the cluster, and now I am stuck with 6 NICs not talking correctly. They show up, no...