Recent content by Nemesiz

  1. Struggling to expose Proxmox VMs publicly via WireGuard + VPS routed IPs (routing loop & connectivity issues)

    If I wanted to trick the VPS provider, I would check whether they use ARP. In that case I would use an L2 VPN (WireGuard is L3 only): Cloudzy VPS eth0 -> bridge with VPN -> L2 VPN -> Proxmox -> bridge with VPN -> VM
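    A minimal sketch of carrying L2 over the L3 WireGuard tunnel using VXLAN (interface names, the VXLAN id and the 10.0.0.x tunnel addresses are assumptions, not from the thread):

    ```shell
    # On the VPS (assumes wg0 is up with tunnel IP 10.0.0.1, peer 10.0.0.2)
    ip link add vxlan100 type vxlan id 100 dstport 4789 \
        local 10.0.0.1 remote 10.0.0.2 dev wg0
    ip link set vxlan100 up
    ip link set vxlan100 master br0      # bridge that also holds eth0

    # On the Proxmox host (mirror of the above)
    ip link add vxlan100 type vxlan id 100 dstport 4789 \
        local 10.0.0.2 remote 10.0.0.1 dev wg0
    ip link set vxlan100 up
    ip link set vxlan100 master vmbr0    # bridge the VMs are attached to

    # VXLAN adds ~50 bytes on top of the WireGuard overhead,
    # so lower the MTU inside the bridged segment accordingly
    ```

    With both ends bridged, ARP from the provider's gateway can reach the VMs as if they sat on the VPS's own L2 segment.
    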
  2. cron job not executed

    The VM's OS ignored the shutdown request, or you haven't installed the QEMU guest agent. If you really need to stop the VM, you can use "qm stop" to kill it.
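    For example (100 is a placeholder VM id, not from the thread):

    ```shell
    qm shutdown 100                              # clean shutdown; hangs if the guest ignores it
    qm shutdown 100 --timeout 60 --forceStop 1   # escalate to a hard stop after 60 s
    qm stop 100                                  # immediate hard stop, like pulling the power
    ```
    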
  3. SDN VLAN zone

    VNet. Personally I'm not sure, but let's try.
  4. SDN VLAN zone

    Try to enable "VLAN Aware"
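    In /etc/network/interfaces that corresponds to something like the fragment below (vmbr0 and the port name eno1 are assumptions, adjust to your setup):

    ```shell
    # /etc/network/interfaces fragment (config sketch)
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    ```
    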
  5. SDN VLAN zone

    Can you show VM network configuration ?
  6. Ceph freeze when a node reboots on Proxmox cluster

    1 MON - single Ceph machine only. 2 MON - no quorum when one fails. 3 MON - regular setup. 4 MON - even number, no gain over 3. 5 MON - I suggest this only on a really huge setup. Try lowering your MON count to 3.
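    A sketch of shrinking the monitor count on a Proxmox-managed Ceph cluster (the node names pve4/pve5 are placeholders):

    ```shell
    ceph quorum_status --format json-pretty   # check current monitors and quorum
    pveceph mon destroy pve4                  # remove the 4th monitor
    pveceph mon destroy pve5                  # remove the 5th monitor
    ceph mon stat                             # verify three monitors remain in quorum
    ```
    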
  7. Is it possible to create replication rule that uses two osd classes and uses them in equal manner?

    I suggest you take a look at these links https://www.osris.org/article/2019/03/01/ceph-osd-site-affinity https://ceph.io/en/news/blog/2015/crushmap-example-of-a-hierarchical-cluster-map/
  8. Ceph - Reduced data availability: 3 pgs inactive

    How did the OSD lose the PG objects? What caused it?
  9. Ceph freeze when a node reboots on Proxmox cluster

    During maintenance I set the noout, norebalance and norecover flags before shutting down an OSD/server. That stops Ceph from moving data around to the other OSDs. In some Ceph talks it was mentioned that a single failing HDD can impact the whole cluster even when its SMART data shows no evidence of the coming HDD death. So you must keep track of disk...
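    The flag handling above looks roughly like this (standard ceph CLI; remember to unset the flags afterwards):

    ```shell
    # before taking the node down
    ceph osd set noout
    ceph osd set norebalance
    ceph osd set norecover

    # ... reboot / service the node ...

    # after the node and its OSDs are back up
    ceph osd unset noout
    ceph osd unset norebalance
    ceph osd unset norecover
    ceph -s   # wait for HEALTH_OK / PGs active+clean
    ```
    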
  10. Legacy-Boot with ZFS-Root on Supermicro X8DT3 (Proxmox VE 9.0, no UEFI)

    The Proxmox installer is not so rich in options. Try installing Debian the way you want, then convert it to Proxmox.
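    The Debian-to-Proxmox conversion is documented in the Proxmox wiki; a rough sketch, shown here for Debian 12 "bookworm" (PVE 9 is based on Debian 13 "trixie", so substitute the trixie repository and keyring per the wiki for that release):

    ```shell
    # add the Proxmox VE no-subscription repository and its key
    echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi   # pulls in the Proxmox kernel
    reboot
    ```
    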
  11. ZFS pool won’t import after switching from /dev/sdx to /dev/disk/by-id – mixed vdev paths

    Hi, have you tried running 'zpool import -d /dev/disk/by-id' to see what ZFS sees?
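    For example (the pool name "tank" is a placeholder):

    ```shell
    zpool import -d /dev/disk/by-id          # scan and list importable pools by stable ids
    zpool import -d /dev/disk/by-id tank     # import the pool using those paths
    zpool status tank                        # confirm the vdevs now show by-id paths
    ```
    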
  12. Ceph - feasible for Clustered MSSQL?

    You have to ask what Ceph will solve for you: scalability, failure tolerance, etc. If you decide to use Ceph, from my short experience and other people's suggestions: 1. Get as fast a network as possible; network latency plays a big role. 2. Get enterprise-grade SSD/NVMe to survive the load. 3. More CPU cores is not always...