Search results

  1. hepo

    One common additional disk for many vm

    What storage solution are you using? Let me recap: - both the VM and the storage are on the same bridge (vmbr0) - you can access the storage from the network, e.g. your laptop/desktop - you cannot access the storage from the VM?!? Can you access the VM from the network? Can you reach the internet from...
  2. hepo

    [SOLVED] WebGUI Can get to Node WEBGUI, VM, or CT using https://192.168.1.xxx(:8006), but not https://hostname(:8006)

    I am not sure I understand what you mean, but clearing the browser cache may be the way to go ;)
  3. hepo

    [SOLVED] WebGUI Can get to Node WEBGUI, VM, or CT using https://192.168.1.xxx(:8006), but not https://hostname(:8006)

    This is by design in the NC config... See config.php trusted_domains: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html
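    A minimal sketch of adding the hostname to Nextcloud's trusted_domains via the occ CLI (install path, index and hostname are assumptions, adjust to your setup):

      # hypothetical install path and array index; run as the web server user
      sudo -u www-data php /var/www/nextcloud/occ \
          config:system:set trusted_domains 1 --value="hostname.example.com"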
  4. hepo

    One common additional disk for many vm

    Which vmbridge have you assigned to the storage and to the VM from which you want to access the storage?
  5. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    I think I managed, looking for feedback... Changed "osd_pool_default_min_size = 1" in ceph.conf as well as changed the pool min_size to 1. This basically keeps the PGs in an active state, which allows HA to migrate the VMs and continue the services when a DC1 failure occurs. Obviously the crush map...
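    A minimal sketch of that change; the pool name is a placeholder:

      # /etc/pve/ceph.conf, [global] section:
      #   osd_pool_default_min_size = 1
      ceph osd pool set <poolname> min_size 1   # apply to an existing pool
      ceph pg stat                              # PGs should stay 'active' with one replica left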
  6. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    Thanks for the comments Dominik, much appreciated. The split-brain issue we are planning to address with a 3rd datacenter (or a VM that will VPN into the environment). I am failing to make the Ceph cluster operational after the 3 nodes of DC1 go down. I have updated the Crush Map - created two...
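    A sketch of what creating two datacenter buckets in the CRUSH map could look like (bucket and host names are hypothetical):

      ceph osd crush add-bucket dc1 datacenter
      ceph osd crush add-bucket dc2 datacenter
      ceph osd crush move dc1 root=default
      ceph osd crush move dc2 root=default
      ceph osd crush move pve1 datacenter=dc1   # repeat for each host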
  7. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    Many thanks for the response! This will be the silver bullet for what we are trying to achieve... Do you happen to know if Pacific adoption is on the roadmap? Thinking we can go live and adopt this functionality once available.
  8. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    Dear Proxmox Team, I would really appreciate some brain cells here, please. I am trying to follow the Stretched Cluster instructions from the Ceph docs - https://docs.ceph.com/en/latest/rados/operations/stretch-mode/#stretch-clusters However, most of the commands do not work / are not recognised...
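    For reference, the stretch-mode commands from the linked Ceph docs look like the sketch below; they require Ceph Pacific or newer, which likely explains why they are not recognised (monitor and bucket names are hypothetical):

      ceph mon set election_strategy connectivity
      ceph mon set_location mon1 datacenter=dc1
      ceph mon set_location mon2 datacenter=dc2
      ceph mon set_location tiebreaker datacenter=dc3
      ceph mon enable_stretch_mode tiebreaker stretch_rule datacenter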
  9. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    Looks like I am the only one posting here... never mind, I will continue o_O Updating the buckets in the crush map turned out to be very simple, very well described here -...
  10. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    This is very similar, not to say identical, to the issue I have - https://forum.proxmox.com/threads/problem-with-ceph-cluster-without-quorum.81123/ At this stage I have deployed a 7th node on a VM that I can back up and make redundant in both DCs. Rebuilding the whole cluster and testing again...
  11. hepo

    Server Downing From Transfer

    https://pve.proxmox.com/wiki/Separate_Cluster_Network
  12. hepo

    RAID hardware vs RAID software

    Asking for a friend ;) He is renting a Fujitsu bare-metal box with a PRAID EP420i (LSI 3108). The disks are configured as JBODs on the controller; there is no battery. What would you reckon, will this be an issue for ZFS? He asked to remove the RAID controller but was declined by the renting...
  13. hepo

    Server Downing From Transfer

    Maybe watchdog functionality: if the network is saturated and corosync cannot operate correctly, the watchdog will kick in (automatic reboot when the watchdog counter runs out). Stop the network interface on which corosync operates and a few seconds later the server will reboot. The best practice is...
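    A sketch of a dedicated corosync link per node in /etc/pve/corosync.conf, as on the Separate_Cluster_Network wiki page linked above (names and addresses are hypothetical):

      node {
          name: pve1
          nodeid: 1
          quorum_votes: 1
          ring0_addr: 10.10.10.1    # dedicated corosync network
          ring1_addr: 192.168.1.11  # fallback link on the shared LAN
      }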
  14. hepo

    CEPH PG Data Recovery / PG Down

    This may be a good read, not sure if it addresses your issue but I would still like to share it - https://ceph.io/planet/recovering-from-a-complete-node-failure/
  15. hepo

    Problem with ceph cluster without quorum

    @Douglas did you manage to test this, and what results did you get? I am looking at the same setup and would like to pick your brain/experience. Thanks!
  16. hepo

    [SOLVED] dns.hostname reading wrong!

    Proxmox Mail Gateway has its own section, not sure you will get help here ;) The domain needs to change - fqdn = hostname.domain; the domain needs to be xxx.com only
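    A sketch of what a correct hostname/domain split could look like (the address and names are hypothetical):

      # /etc/hosts
      192.168.1.10  pmg.example.com  pmg
      # verify:
      hostname --fqdn   # -> pmg.example.com
      hostname -d       # -> example.com (domain only)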
  17. hepo

    [SOLVED] Cant access proxmox webgui

    No comment, literally the second reply top down... sorry for that :rolleyes:
  18. hepo

    zpool import -a persistent

    Wonder if you used Google for this? - https://askubuntu.com/questions/123126/how-do-i-mount-a-zfs-pool
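    A minimal sketch of making the import survive reboots, assuming a systemd distro and a hypothetical pool name "tank":

      zpool import -a                                # one-off: import every visible pool
      zpool set cachefile=/etc/zfs/zpool.cache tank  # record the pool in the cache file
      systemctl enable zfs-import-cache.service      # re-import cached pools at boot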
  19. hepo

    One common additional disk for many vm

    External storage with an NFS mount to both VMs is the solution that comes to mind. Or a "storage VM" that acts as an NFS server so the others can connect to it... one stores, the others read. A sketch of the storage-VM variant follows.
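    A minimal sketch, assuming hypothetical paths, a hypothetical hostname and subnet:

      # on the storage VM (NFS server), /etc/exports:
      #   /srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
      exportfs -ra                                   # apply the export
      # on each consuming VM:
      mount -t nfs storage-vm:/srv/share /mnt/share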
  20. hepo

    [SOLVED] Cant access proxmox webgui

    I am not the most experienced person here - look at my badge ;) I saw you already checked that a listener on 8006 exists, but did you also try curl -k https://localhost:8006? If the HTML content is returned then the pveproxy service is working correctly; the only other options would be a FW or weird...
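    A few checks along those lines (a sketch; pveproxy should answer on 8006 locally before you suspect the network):

      ss -tlnp | grep 8006             # is pveproxy listening?
      curl -k https://localhost:8006   # should return the GUI's HTML
      systemctl status pveproxy        # service state
      journalctl -u pveproxy -b        # logs since boot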