Search results

  1. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    May want to change the VM disk cache to none. I got a significant increase in IOPS over writeback. I also have WCE (write-cache enabled) on the SAS drives. Set it with "sdparm --set WCE --save /dev/sd[x]"
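    A minimal sketch of the two knobs being discussed, assuming a hypothetical VMID 100, volume local-lvm:vm-100-disk-0, and device /dev/sdb:

      qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none   # re-declare the disk with the desired cache mode
      sdparm --set WCE --save /dev/sdb                        # persistently enable the SAS drive's write cache
      sdparm --get WCE /dev/sdb                               # verify: 1 = on, 0 = off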
  2. 12 Node CEPH How Many Nodes Can Be Fail ?

    Don't know the answer to your question, but I thought you needed an odd number of nodes for quorum? For example, I have a 4-node Ceph cluster, so I use a QDevice for quorum https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
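    A rough sketch of the QDevice setup that wiki page describes, assuming 192.168.1.50 is a hypothetical external host acting as the tie-breaker:

      apt install corosync-qnetd              # on the external QDevice host
      apt install corosync-qdevice            # on every cluster node
      pvecm qdevice setup 192.168.1.50        # run once from any cluster node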
  3. Proxmox/Ceph hardware and setup

    It's considered best practice to have 2 physically separate cluster (Corosync) links, obviously connected to 2 different switches. Corosync wants low latency, not bandwidth, so 2 x 1GbE is sufficient.
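    A sketch of how two Corosync links can be declared at cluster-creation time, assuming 10.10.10.0/24 and 10.10.20.0/24 are hypothetical dedicated cluster networks:

      pvecm create mycluster --link0 10.10.10.11 --link1 10.10.20.11   # on the first node
      pvecm add 10.10.10.11 --link0 10.10.10.12 --link1 10.10.20.12    # joining nodes point at the first node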
  4. Set write_cache for ceph disks persistent

    If you have SAS drives, you can run the following command as root: sdparm --set=WCE --save /dev/sd[X]. To confirm write cache is enabled, run as root: sdparm --get=WCE /dev/sd[X] (1 = on, 0 = off)
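    To apply that to several OSD disks at once, a loop like the following works (the /dev/sd[b-g] range is just a placeholder for your SAS OSDs):

      for d in /dev/sd[b-g]; do
        sdparm --set=WCE --save "$d"
        sdparm --get=WCE "$d"      # prints 1 once the write cache is on
      done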
  5. Advice for new Hyper converged platform

    I currently have 2 Proxmox Ceph clusters. One is 3 x 1U 8-bay SAS servers using a full-mesh network (2 x 1GbE bonded). 2 of the drive bays are ZFS mirrored for Proxmox itself and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a stage cluster...
  6. Firewall ports to allow when Proxmox and PBS on different networks?

    I have the following network setup:
    192.168.1.0/24 (VLAN 10): pve1.host.local 192.168.1.11/24, pve2.host.local 192.168.1.12/24, pve3.host.local 192.168.1.13/24
    192.168.2.0/24 (VLAN 20): pbs.guest.local 192.168.2.254/24
    Each VLAN is protected by a firewall. Per...
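    For reference, the backup traffic itself goes to the PBS API on TCP 8007; an iptables-style sketch of the rule the inter-VLAN firewall would need (syntax will differ on other firewalls):

      iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.2.254 -p tcp --dport 8007 -j ACCEPT   # PVE nodes -> PBS API/web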
  7. need hardware recommendation for 3 Node Cluster

    If you are open to used servers, head on over to labgopher.com. Best bang for the buck are the Dell 12th-generation servers, i.e., R620/R720. However, I run Proxmox Ceph on 10-year-old server hardware. Works very well.
  8. VirtIO vs SCSI

    According to this post https://forum.proxmox.com/threads/virtio-scsi-vs-virtio-scsi-single.28426, virtio uses a single controller per disk, just like virtio scsi single. As to which one is "faster", no idea.
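    For what it's worth, the controller type is switchable per VM; a sketch with a hypothetical VMID 100:

      qm set 100 --scsihw virtio-scsi-single   # one VirtIO SCSI controller per disk (allows a dedicated iothread per disk)
      qm set 100 --scsihw virtio-scsi-pci      # one shared VirtIO SCSI controller for all SCSI disks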
  9. [SOLVED] Proxmoxer proxmox_kvm ignore urllib3 self-signed cert error?

    This is fixed. There were several issues. First was updating Ansible to the latest version so the Proxmoxer pip module supports the Proxmox 6.x APIs to create VMs. Second was that the behavior of "connection: local" being used playbook-wide has changed since Ansible 2.8.5. I had to add...
  10. [SOLVED] Proxmoxer proxmox_kvm ignore urllib3 self-signed cert error?

    I forgot the command-line kung-fu to tell urllib3 to ignore the self-signed certs on the Proxmox hosts. I have the same error as this post https://learn.redhat.com/t5/Automation-Management-Ansible/Ansible-proxmox-kvm-module/td-p/3935 Anyone remember the command-line option or environment variable to...
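    Two knobs commonly used for this (not necessarily the one the thread settled on): suppressing urllib3's warning via an environment variable, or turning off certificate validation in the module call itself:

      export PYTHONWARNINGS="ignore:Unverified HTTPS request"   # silences urllib3's InsecureRequestWarning
      # alternatively, the proxmox_kvm task can set validate_certs: no to skip verification entirely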
  11. Proxmox VE 6.0 beta released!

    I'm guessing I have to file an issue with the proxmoxer maintainers?
  12. Proxmox VE 6.0 beta released!

    Well, the fun continues. Before I did a clean install of the 6.0 beta, I backed up my Ansible playbooks from 5.4 VE, which worked. So I did an 'apt-get install ansible' and an 'apt-get install python-pip', then a 'pip install proxmoxer'. When I run the playbook to create a VM, I get the following error...
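    For context, the install chain being described would look roughly like this (Python 2-era package names from the post; newer releases would use python3-pip / pip3):

      apt-get install ansible python-pip
      pip install proxmoxer requests      # proxmox_kvm needs both the proxmoxer and requests Python libraries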
  13. A short story on a cluster failing

    I thought quorum requires an odd number of nodes to avoid split-brain situations?
  14. Proxmox VE 6.0 beta released!

    Can someone else confirm that /usr/sbin/ceph-disk exists? It shows up in 'apt-file list ceph-osd' but not in /usr/sbin. I checked my other 2 nodes and there's no ceph-disk. I also did an 'apt-get install --reinstall ceph-osd' but still no ceph-disk.
  15. Storage Recommendation Needed

    I'm running a proof-of-concept full-mesh 3-node Ceph cluster with identical servers (an odd number is needed for quorum). It's currently running Proxmox 5.4, but I will be installing the 6.0 beta for performance optimizations. Since an odd number of nodes is needed for quorum, create either a 3- or 5-node...
  16. PROXMOX on R720xd - Home Server Build - Looking for suggestions

    The mini mono H310 supposedly can be flashed to IT mode https://www.youtube.com/watch?v=Y1Xi5NZRlXM and https://www.reddit.com/r/homelab/comments/bkxszi/flashing_the_h310_mono_mini_to_it_mode/ Here are some that are already flashed to IT mode...
  17. CEPH Write Performance

    I believe WD Reds are SATA drives? If so, you may want to confirm that write caching is enabled with the 'hdparm' command. I don't use SATA; I use SAS drives, which use the 'sdparm' command instead.
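    For SATA drives the equivalent hdparm invocations would be (assuming /dev/sda is a placeholder for the WD Red):

      hdparm -W /dev/sda     # query: reports "write-caching = 1 (on)" or 0 (off)
      hdparm -W1 /dev/sda    # enable the drive's write cache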
  18. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

    I don't have Dells, but I do have 8-drive SunFires just as old as the R610s. They don't have 10GbE but do have a full-mesh 1GbE setup https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server Since you can configure Ceph via the GUI, I suggest you go with that...
  19. CephFS trough Proxmox GUI - No VM storage possible?

    I did everything through the GUI. The steps I did were: 1) Create the CephFS first https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes#pveceph_fs. This will create two pools called cephfs_data and cephfs_metadata. 2) Create the RBD storage by clicking on Datacenter -> Add -> RBD. It should...
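    The CephFS step can also be done from the shell if the GUI is unavailable; a sketch using pveceph (pool names follow the --name value, sizes are whatever the defaults give you):

      pveceph mds create                              # a metadata server is needed first
      pveceph fs create --name cephfs --add-storage   # creates the cephfs_data / cephfs_metadata pools and the storage entry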
  20. Ansible proxmox_kvm gettting MAC address for PXE

    Thanks to morph027, the MAC address is returned upon VM creation. Here's my Ansible VM create task:
      tasks:
        - name: VM create
          proxmox_kvm:
            api_user: "root@pam"
            api_password: "{{ api_password }}"
            api_host: "{{ api_host }}"
            node: "{{ node }}"
            name: "{{...