Recent content by Zombie

  1. Very slow VM on Dell R620

    I would suggest what UdoB posted. It sounds like you re-did the ZFS pool as a single mirror, but you should try breaking it into four striped vdevs, which will give much better performance (a layout sketch follows this list). Also, what type of hard drives are you using, consumer or enterprise?
  2. High latency on Ceph. Poor Performance for VMs

    Just glancing at this: at a minimum, upgrade the Ceph public and Ceph cluster networks to at least 10G (a config sketch follows this list).
  3. AMD 7900 XT or XTX pci passthrough

    Were you able to get this to work with the AMD GPU? If so, can you share what worked? Thanks!
  4. [SOLVED] Is linux bridged needed for this or not?

    So with the help of a forum member I was able to bond my two 10G NICs and set the bond up on the switch! I was also able to create two VLANs for Ceph public and Ceph cluster, and then a third VLAN on the same bond for a migration network (a config sketch follows this list)!
  5. [SOLVED] Is linux bridged needed for this or not?

    Thank you! That is what I needed to know! I was thinking about bonding them for redundancy (I have a capable switch, a Brocade ICX 6610), but I'm not 100% sure how to set it all up with a bond.
  6. [SOLVED] Is linux bridged needed for this or not?

    So hopefully this is an easy question someone can explain to me; I am not the best when it comes to networking. I have four NICs on each server, 2x 1G and 2x 10G. I have vmbr0 for Proxmox management on 192.168.x.x. Now I am going to be setting up Ceph on the two 10G ports. My...
  7. CephFS pool crush rule change question

    Thank you! Going to start it in the morning!
  8. CephFS pool crush rule change question

    So I have never had to do this, so I thought I would ask to make sure it will work and I don't lose data on the CephFS pool. First, I have been replacing my HDD OSDs with SSDs, and I have already moved my VM/CT storage to the new crush rule for the SSDs (a command sketch follows this list). Now the part I am not 100% certain about is that...
  9. Network setup review

    So currently I am running a four-node Proxmox Ceph cluster (adding a fifth in two months). The hardware is the same on all the servers. I have seen a lot of different posts about the Ceph network, management network, Corosync network, VM network, and last but not least the backup network. I am wanting...
  10. Grafana Proxmox Dashboards

    Thanks for sharing, @H4R0! These are great, and I can build on them!
  11. Abysmal Ceph write performance

    Are those consumer SSDs? If so, that might be part of the problem. There are a lot of posts here on the forums about consumer vs. enterprise drives when it comes to Ceph (a quick benchmark sketch follows this list).
  12. Grafana Proxmox Dashboards

    So I was looking through the InfluxDB that is collecting the stats from my Proxmox cluster, wondering if I am missing the stats or if they just don't get sent over. I am looking for the Ceph metrics but am not seeing them (one possible approach is sketched after this list). Is there an alternative way to collect them if they are not being sent...
  13. Nextcloud hdd setup

    So, based on your statement, unprivileged containers are not safe to use for any forward-facing web services? Care to elaborate? I do disagree, but maybe I just don't know enough, so it would be great if you could expand on why.
  14. Nextcloud hdd setup

    Just create an LXC container and use a mount point to the zpool (a sketch follows this list). It will be much better for managing the data in the long run. It truly runs great; I have been doing it like this for a long time, and there are fewer abstraction layers!
  15. All LXCs and VMs fail to start

    It may not be, @bbgeek17, but it's just strange that the issue popped up right after the NAS change; it could still be related to the issue you linked above. I don't know unless we can look at some logs.
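For the striped-vdev suggestion in item 1, here is a minimal sketch of what a four-vdev striped-mirror (RAID10-style) layout can look like. The pool name tank and the disk names are placeholders, not details from the original thread:

```
# Hypothetical layout: four mirror vdevs striped together.
# Pool and disk names are placeholders; prefer /dev/disk/by-id/ paths on real hardware.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh

# Verify the vdev layout
zpool status tank
```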
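For the 10G advice in item 2, the Ceph public and cluster networks are defined in ceph.conf (Proxmox keeps it at /etc/pve/ceph.conf). A sketch with made-up subnets; note that moving an existing cluster to new networks needs care, since monitors keep their configured addresses:

```
# ceph.conf excerpt -- both subnets are placeholders for dedicated 10G networks
[global]
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.20.0/24
```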
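For the bond-plus-VLANs setup in item 4, a sketch of what the Proxmox /etc/network/interfaces entries could look like, assuming an LACP bond over two 10G ports named enp1s0f0 and enp1s0f1, with made-up VLAN IDs and addresses for the Ceph public, Ceph cluster, and migration networks:

```
# /etc/network/interfaces excerpt -- NIC names, VLAN IDs and addresses are placeholders
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

# Ceph public network
auto bond0.10
iface bond0.10 inet static
    address 10.10.10.11/24

# Ceph cluster network
auto bond0.20
iface bond0.20 inet static
    address 10.10.20.11/24

# Migration network
auto bond0.30
iface bond0.30 inet static
    address 10.10.30.11/24
```

With ifupdown2 the changes can be applied with ifreload -a; the switch ports of course need a matching LACP and VLAN configuration.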
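For the crush-rule change in item 8, the usual pattern is a device-class rule plus a pool reassignment; Ceph then rebalances in the background as long as enough SSD OSDs are up. A sketch with placeholder names (rule replicated-ssd, pool cephfs_data):

```
# Create a replicated rule limited to OSDs with the ssd device class
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Point the existing pool at the new rule; data migrates in the background
ceph osd pool set cephfs_data crush_rule replicated-ssd

# Watch recovery/rebalance progress
ceph -s
```

A CephFS instance has separate data and metadata pools, so the metadata pool would need the same treatment if it should also live on the SSDs.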
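For the consumer-versus-enterprise point in item 11, a common quick check is a single-job 4k sync-write test with fio, since Ceph's write path leans heavily on sync write latency. This is only a sketch; the test file path is a placeholder and should point at scratch space, not at data you care about:

```
# Hypothetical sync-write test; consumer SSDs without power-loss protection
# often drop to a few hundred IOPS here, while enterprise SSDs stay much higher.
fio --name=synctest --filename=/mnt/scratch/fio.tmp --size=1G \
    --rw=write --bs=4k --ioengine=libaio --iodepth=1 --numjobs=1 \
    --fsync=1 --runtime=60 --time_based
```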
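For the missing Ceph metrics in item 12, one alternative (an assumption on my part, not something confirmed in the thread) is to bypass the Proxmox metric server and scrape Ceph's own manager exporter, then add it as a second data source in Grafana; mgr-host below is a placeholder for the active manager node:

```
# Enable the Ceph manager's built-in Prometheus exporter
ceph mgr module enable prometheus

# The active manager then serves metrics on port 9283 by default
curl -s http://mgr-host:9283/metrics | head
```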
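For the mount-point approach in item 14, a sketch of binding a ZFS dataset on the host into an LXC container. The container ID 101, the dataset tank/nextcloud-data, and the target path /mnt/data are all placeholders:

```
# Create a dedicated dataset on the host for the data
zfs create tank/nextcloud-data

# Bind it into container 101 as mount point mp0
pct set 101 -mp0 /tank/nextcloud-data,mp=/mnt/data

# For an unprivileged container, ownership has to match the shifted UID range
# (host UID 100000 maps to UID 0 inside the container by default)
chown -R 100000:100000 /tank/nextcloud-data
```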