Search results

  1. Low filetransfer performance in Linux Guest (ceph storage)

    Since KVM is single-threaded and Ceph is multi-threaded, I tried adding 4 new HDDs (VirtIO block) and did a RAID 0 in Windows, which gave me much better speeds. Next step is Ceph cache tiering + SSD/NVMe journaling, or if you are really brave, bcache.
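
    If it helps, a minimal sketch of attaching the extra VirtIO disks from the Proxmox CLI (the VM ID 100, storage name ceph-pool and 100 GB size are placeholders, not from the post); the RAID 0 striping is then done inside the Windows guest:

    ```
    # Add four extra VirtIO block disks backed by the Ceph storage
    qm set 100 --virtio1 ceph-pool:100
    qm set 100 --virtio2 ceph-pool:100
    qm set 100 --virtio3 ceph-pool:100
    qm set 100 --virtio4 ceph-pool:100
    ```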
  2. Ceph storage mix SSD and HDD

    The SSD cache pool did wonders for me, but I also have journals on separate SSDs. I followed the guide here: http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/ You should also consider upgrading the network to 10 Gb. And all disks should be in the pool, as more disks = faster...
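
    For reference, a cache tier along the lines of that guide is built with the standard Ceph tiering commands; this is only a sketch, and the pool names rbd and ssd-cache plus the tuning values are assumptions, not taken from the post:

    ```
    # Put an SSD pool in front of the existing HDD-backed pool as a writeback cache tier
    ceph osd tier add rbd ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd ssd-cache
    # Minimal cache tuning; without a hit set and a size target the tier will not flush/evict sanely
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 200000000000
    ```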
  3. Where can I tune journal size of Ceph bluestore?

    A journal SSD with 1 GB of journal per TB of slow drive, so about 8 GB partitions for your 8 TB drives. Also take a look at a cache layer; this did WONDERS for me. Here is the guide I followed, pretty simple: http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/
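
    On BlueStore the equivalent knobs are the DB/WAL partition sizes used when an OSD is created; a minimal ceph.conf sketch, assuming the OSDs are (re)created afterwards and reusing the 8 GB figure from above (values are examples only):

    ```
    [osd]
    bluestore_block_db_size  = 8589934592   # 8 GiB RocksDB (block.db) partition per OSD
    bluestore_block_wal_size = 1073741824   # 1 GiB separate WAL partition (optional)
    ```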
  4. proxmox ceph - public and cluster network

    I forgot to paste the public network as well, sorry for that, the original post has been edited! After that, I set the cluster network IPs (172.16.0.11, then .12, etc., in my case) in the Proxmox GUI, reboot and it's done :)
  5. proxmox ceph - public and cluster network

    This is what I have in my /etc/ceph/ceph.conf:
        cluster network = 172.16.1.0/24
        public network = 192.168.1.0/24
        [mon.pve]
        host = pve
        mon addr = 192.168.1.12:6789
        [mon.pve11]
        host = pve11
        mon addr = 192.168.1.11:6789
        [mon.pve3]
        host = pve3...
  6. VM Start Timeout with PCI GPU

    Experiencing the same thing... not too often, but often enough.
  7. [Working notes] Ceph MDS on Proxmox 5.2

    Am posting this here in case anybody searches for this in the future. https://github.com/fulgerul/ceph_proxmox_scripts
        #
        # Install Ceph MDS on Proxmox 5.2
        #
        ## On MDS Node 1 (name=pve11 / ip 192.168.1.11)
        mkdir /var/lib/ceph/mds/ceph-pve11
        chown -R ceph:ceph /var/lib/ceph/mds/ceph-pve11
        ceph...
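
    The snippet is cut off after ceph...; for context, the usual remaining steps for a manual MDS deployment on Ceph Luminous look roughly like this (a sketch only, reusing the pve11 name from above, not a verbatim continuation of the post):

    ```
    # Create a keyring for the new MDS and start the daemon
    ceph auth get-or-create mds.pve11 mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
        -o /var/lib/ceph/mds/ceph-pve11/keyring
    systemctl enable --now ceph-mds@pve11
    ```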
  8. VirtIO Drivers Windows 10

    If on Ceph, I would enable write-back cache on the VM's disks for performance in KVM.
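
    For example, on a Ceph-backed VirtIO disk this can be set per disk from the Proxmox CLI (the VM ID, pool and volume name below are placeholders), or via the GUI under VM -> Hardware -> Disk -> Cache:

    ```
    # Re-set the existing disk with cache=writeback
    qm set 100 --virtio0 ceph-pool:vm-100-disk-1,cache=writeback
    ```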
  9. Proxmox 5.2 + Ceph 12 Upgrade to 10G/s

    Hi, adding the cluster network and reloading Ceph should suffice. Here are the commands to do a live reload:
        systemctl stop ceph\*.service ceph\*.target
        systemctl start ceph.target
    Specifying host is only needed for mons, unless you wanna be super picky about it and define hosts in the OSDs as well...
  10. Hyperconverged Infrastructure with Proxmox and Ceph

    I use 15 GB SSD journals for 6 TB drives and I never go beyond 3 GB for WAL+DB (journal)...
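
    If you want to check the actual usage on your own OSDs, the BlueFS counters from the admin socket show it; osd.0 below is just an example, and the exact counter names can vary by release:

    ```
    # db_used_bytes / wal_used_bytes report how much of the DB/WAL device is really in use
    ceph daemon osd.0 perf dump bluefs
    ```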
  11. TIFU - Removed RBD / Feature request?

    Thanks for the answer, spirit!
  12. Site to site VPN for Proxmox?

    Hi, thanks for the answer! I will try tinc and OVPN as well, just wanted to check if anyone is running a multi-site VPN + Proxmox that might give me some gotchas! :)
  13. Site to site VPN for Proxmox?

    I am expanding now and wonder if anyone has tested site-to-site? There is a URL that uses tinc, but I keep reading about bad speeds, so I am wondering if anyone practices this? I have a WireGuard VPN that maxes out my WANs but cannot for the life of me get multicast to work for now...
  14. Ceph, MDS, CephFS questions

    +1 for Ceph MDS support; I managed to get it up and running once but am back to errors! :(
  15. CEPH SSD Pool

    A word of warning: running with 1 replica is not recommended!
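
    To illustrate, the replica counts are per-pool settings; a sketch with a hypothetical pool name ssd-pool, keeping 3 copies and requiring at least 2 to serve I/O:

    ```
    ceph osd pool set ssd-pool size 3      # number of replicas to keep
    ceph osd pool set ssd-pool min_size 2  # minimum replicas required for the pool to accept I/O
    ```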
  16. Feature request: Names in DC->HA

    It gets old fast to remove everything from HA and re-add all machines + name in the details. Any way we could get names and not just IDs inside this view?
  17. TIFU - Removed RBD / Feature request?

    TL;DR: Forgot to set the Protected setting! Can we get this by default, or give us the option to set it by default on all VMs? Will KVM do multi-threaded RBDs in the future? I see some posts about it... So in my quest for faster speeds inside the VMs with Ceph as the underlying storage, I got a...
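
    For anyone landing here, the Protected flag can also be set from the CLI; a one-line sketch (VM ID 100 is a placeholder):

    ```
    qm set 100 --protection 1   # blocks the remove-VM and remove-disk operations for this VM
    ```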
  18. [SOLVED] HA Mgr "old timestamp dead?"

    So I just wanted to share this issue that I've been having with my cluster. I had loads of RRD cache issues, so I had to reset a whole bunch of services, but got it working. Then the HA cluster stopped working. After a quick systemctl status pve-ha-crm.service I saw this... pve...
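
    In case it helps others hitting the same thing, restarting the RRD cache daemon and the HA services on the affected node is the usual first step (a sketch, not necessarily the exact sequence from this thread):

    ```
    systemctl restart rrdcached.service
    systemctl restart pve-ha-lrm.service pve-ha-crm.service
    systemctl status pve-ha-crm.service   # verify the CRM comes back cleanly
    ```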
  19. [SOLVED] ceph rbd error: rbd: list: (95) Operation not supported (500)

    So I had disabled cephx and then enabled it again, but still got the error (maybe pvestatd should check if cephx is enabled again). My solution for the "pvestatd: rados_connect failed - Operation not supported" was therefore the below:
        cd /etc/pve/priv/ceph/old/
        mv * ../
    Now my Proxmox GUI...
  20. GPU Passthrough not working

    Hi, I would definitely try downloading that ROM file and putting it in KVM as per https://pve.proxmox.com/wiki/Pci_passthrough#romfile
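
    Following that wiki section, the end result is a romfile= option on the passed-through device in the VM config, with the ROM placed under /usr/share/kvm/; a sketch with a placeholder PCI address and file name:

    ```
    # /etc/pve/qemu-server/<vmid>.conf
    hostpci0: 01:00.0,pcie=1,x-vga=on,romfile=vbios.rom
    ```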