Recent content by apap

  1.

    CEPH Public Cluster Network Basic Setup Question

    It was a network issue: I had set the VLAN on the switch wrong. That leads to the next question: I understand it is best if CEPH public and cluster traffic run on separate networks (or VLANs), but what impact would it have if they were on the same VLAN/subnet? There are basically only 4 IPs...
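    If you do decide to run both on one subnet, a minimal ceph.conf sketch would simply point both directives at the same network (the subnet below is just the example from this thread, not a recommendation):

      [global]
      public_network  = 10.10.10.0/24
      # same subnet as above; if cluster_network is omitted entirely,
      # Ceph also falls back to using the public network for replication
      cluster_network = 10.10.10.0/24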
  2.

    CEPH Public Cluster Network Basic Setup Question

    Beginner on Proxmox, CEPH and Linux in general. Question on basic behavior.
    Node 1: Mgmt IP - 192.168.86.111/24, CEPH Public Net IP - 10.10.10.11/24 (no gateway), CEPH Cluster Net IP - 10.10.20.11/24 (no gateway)
    Node 2: Mgmt IP - 192.168.86.112/24, CEPH Public Net IP - 10.10.10.12/24 (no...
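    For reference, the matching Ceph network definition for this layout would look roughly like the following (a sketch only, assuming the addresses above; on Proxmox this normally ends up in /etc/pve/ceph.conf via pveceph init):

      [global]
      public_network  = 10.10.10.0/24   # client and monitor traffic
      cluster_network = 10.10.20.0/24   # OSD replication/heartbeat traffic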
  3.

    Installing qemu-agent on windows server core (Hyper-V 2019 core in my case)

    I've finally discovered how to install qemu-agent on Windows Server Core. Please see the method below. Sourced from here: https://gist.github.com/goffinet/36e6581670cde2d016f620f6f490d281 In PowerShell on Windows Server Core:
      # display available drives
      Get-PSDrive
      # create local driver...
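    For anyone landing here via search, a minimal sketch of the usual approach (assuming the virtio-win ISO is attached to the VM and shows up as drive D:; the drive letter and MSI path are examples, check your own with Get-PSDrive):

      # list filesystem drives to find the virtio-win CD-ROM
      Get-PSDrive -PSProvider FileSystem
      # install the QEMU guest agent silently from the ISO
      Start-Process msiexec.exe -ArgumentList '/i D:\guest-agent\qemu-ga-x86_64.msi /qn' -Wait
      # confirm the service is present (the service name is typically QEMU-GA)
      Get-Service QEMU-GA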
  4.

    CEPH Annoying Warning

    This is exactly it. Solved. Thanks!
  5.

    CEPH Annoying Warning

    I've managed to get CEPH working. Exciting stuff! I have this annoying warning constantly present: I assume it comes from this: Can I increase the PG count of the .mgr pool to 2? Will it break anything? What is the best practice for this?
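    If the warning turns out to be about PG counts, a couple of hedged options (standard Ceph CLI; adjust the pool name and values to whatever ceph health detail actually reports):

      # see what the autoscaler thinks each pool should have
      ceph osd pool autoscale-status
      # either let the autoscaler manage the .mgr pool ...
      ceph osd pool set .mgr pg_autoscale_mode on
      # ... or bump the PG count manually, as asked above
      ceph osd pool set .mgr pg_num 2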
  6.

    CEPH or Cluster first

    Thank you. If I have a 3-node Ceph/PVE cluster, do you recommend making every node a monitor?
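    For what it's worth, with three nodes the typical layout is a monitor (and manager) on each node. A rough sketch of the commands, assuming Ceph is already initialised and you run them on each node in turn:

      pveceph mon create
      pveceph mgr create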
  7.

    CEPH or Cluster first

    I am trialing a 3-node PVE setup with a cluster and CEPH. Should I set up the cluster first or CEPH first?
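    The usual order is the Proxmox cluster first, then Ceph on top of it. A minimal sketch of that sequence (the cluster name and the network are placeholder examples, not from this thread):

      # on the first node: create the PVE cluster
      pvecm create mycluster
      # on each additional node: join it
      pvecm add <ip-of-first-node>
      # then install Ceph on every node
      pveceph install
      # initialise the Ceph config once, on one node
      pveceph init --network 10.10.10.0/24
      # and create a monitor on each node
      pveceph mon create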
  8.

    3 Network Address for Proxmox Cluster with CEPH?

    The wiki (https://pve.proxmox.com/wiki/Network_Configuration) states: "If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces, other modes are unsupported." Does this mean that if the IP address is set directly on...
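    For context, "active-passive" here is the bond mode Linux calls active-backup. A minimal /etc/network/interfaces sketch for a corosync/cluster link on such a bond (interface names and the address are examples only):

      auto bond1
      iface bond1 inet static
          address 10.10.30.11/24
          bond-slaves eno3 eno4
          bond-mode active-backup
          bond-miimon 100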
  9.

    3 Network Address for Proxmox Cluster with CEPH?

    Trying to understand the basic network setup for a Proxmox cluster with CEPH. Say I have 6 physical NICs: eno1 - eno6. I would need to have 3 bonds (assume LACP bonds):
    bond0: eno1 + eno2, LACP, layer 2+3
    bond1: eno3 + eno4, LACP, layer 2+3
    bond2: eno5 + eno6, LACP, layer 2+3
    I would also need to have 3...
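    Each of those bonds would look roughly like this in /etc/network/interfaces (a sketch for bond0 only, using the members and hash policy listed above):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-xmit-hash-policy layer2+3
          bond-miimon 100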
  10.

    Ceph SSD Cache for HDD Array. Which pool do I add to proxmox? SSD or HDD

    I've thought about what you said here. Together with another person's comment somewhere else, this warning finally sank in. I will repeat my conclusion as it may help other beginners to PVE and CEPH: with RBD storage, cache tiering is not recommended. Excerpt from a comment by another user regarding this...
  11.

    Ceph SSD Cache for HDD Array. Which pool do I add to proxmox? SSD or HDD

    Thank you for the answer. However, your answer would apply if I were not using the Ceph SSD cache feature. In the Ceph caching scenario, there are 2 pools that are related/overlaid: an SSD pool overlaying the HDD pool. The question is: in Proxmox, which pool do I add as storage, the SSD or the HDD pool?
  12.

    Ceph SSD Cache for HDD Array. Which pool do I add to proxmox? SSD or HDD

    I am a beginner to Proxmox and Ceph. Currently in the middle of testing for deployment to a production environment. What I've done (hopefully to help others puzzling over the same issue, as well as to verify that my work is correct): 1. Install PVE and activate CEPH 2. Added 2 x SSD drives...
  13.

    RAID1 LVM Directory as VM Storage? Size mismatch between PVE GUI and console.

    Due to other posts online stating that RAID1 ZFS requires gobs of system memory, I plan to put my VMs in a RAID1 LVM setup instead. I would then mount this RAID1 LV as a directory in datacenter storage. Is this an acceptable method of running VMs in a production environment? ---- Testing PVE...
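    A rough sketch of that approach (the volume group, LV name, size and mount point are all made-up examples; the VG needs at least two physical volumes for the mirror, and the storage is registered with pvesm at the end):

      # mirrored LV on an existing volume group
      lvcreate --type raid1 -m 1 -L 500G -n vmstore pve-data
      mkfs.ext4 /dev/pve-data/vmstore
      mkdir -p /mnt/vmstore
      echo '/dev/pve-data/vmstore /mnt/vmstore ext4 defaults 0 2' >> /etc/fstab
      mount /mnt/vmstore
      # register it as directory storage for VM disks
      pvesm add dir vmstore-dir --path /mnt/vmstore --content images,rootdir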
  14.

    VM Disk - ZFS or LVM

    I tried to read the wiki and old threads but I still have questions. Basic conditions: my PVE server is RAM-limited (16GB non-ECC). PVE is running on 2 x 120GB SATA SSDs. Storage is 1 x 8TB SATA HDD; I will add another 8TB HDD for a mirror in the future. 1. ZFS question: Other posts have said that...
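    On the RAM point: ZFS's memory appetite is mostly the ARC cache, and it can be capped. A minimal sketch (the 2 GiB value is just an example for a 16GB host, not a recommendation from this thread):

      # cap the ZFS ARC at 2 GiB (the value is in bytes)
      echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
      update-initramfs -u
      # check the active limit after a reboot
      cat /sys/module/zfs/parameters/zfs_arc_max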