Search results

  1. hepo

    Improve VM restore speed - ZFS datastore over NFS

    No one who has dealt with such an issue before? I am sure there are plenty of people with their datastores on an NFS share who may have run into this. An NFS share on a ZFS pool may be a bit exotic in the community though...
  2. hepo

    Improve VM restore speed - ZFS datastore over NFS

    Greetings to all, Need some advice please... We are trying to set up a 2nd site (DR) on the cheap. We have a TrueNAS Core server with a 40TB HDD (z2) pool. The server has 32 cores and 64GB RAM. The PBS datastore is on the TrueNAS, connected over NFS. We have synced (sync job) the backups from the...
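    A minimal sketch of that layout, assuming a hypothetical TrueNAS hostname, export path and datastore name (none are given in the post):

      # mount the TrueNAS NFS export on the PBS host (placeholder names)
      mount -t nfs truenas.example.com:/mnt/tank/pbs /mnt/pbs-datastore
      # register the mount point as a PBS datastore
      proxmox-backup-manager datastore create dr-store /mnt/pbs-datastore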
  3. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Thanks for chiming in @VictorSTS, appreciated
  4. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    You are the best! It is clear that our setups are not comparable, which gives me perspective. We have started the process of replacing the consumer sh1t and hope to have a plan very soon. Appreciate you @itNGO !!!
  5. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    @itNGO would it be too much to ask you to run the same command on your cluster so we can compare?
  6. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Many thanks for the time and effort spent sharing ideas and brains ;)
  7. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Again, I am not confident in reading/interpreting this data well, but I think the "improvement" is a joke. Let me add another element to the comparison. This is the same VM; the disk this time is on local-zfs (the boot disk, ZFS mirror, Micron SSD). Dual DC ceph Single DC ceph Local-zfs disk...
  8. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Also, to be complete, I need to share the network performance within the DC: 100 packets transmitted, 100 received, 0% packet loss, time 99873ms rtt min/avg/max/mdev = 0.039/0.118/0.216/0.040 ms root@pve11:~# iperf3 -c 10.10.12.12 Connecting to host 10.10.12.12, port 5201 [ 5] local 10.10.12.11...
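    For reference, the two measurements quoted above come from standard tools; the node IPs below are taken from the quoted output:

      # intra-DC latency: 100 probes, matching the quoted ping summary
      ping -c 100 10.10.12.12
      # raw TCP throughput: start the server side first
      iperf3 -s               # on 10.10.12.12
      iperf3 -c 10.10.12.12   # on 10.10.12.11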
  9. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Here come the results... both tests were run from the same VM; the disk (sdb) resides on ceph. The fio command I've used (taken from the ceph benchmark test performed by the Proxmox team - pinned at the top of the forum): fio --ioengine=libaio --filename=/dev/sdb --direct=1 --sync=1 --rw=write --bs=4K...
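    The quoted command is truncated; assuming it matches the Proxmox Ceph benchmark paper it was taken from, the full invocation would look like this (note it writes directly to /dev/sdb and destroys its contents):

      fio --ioengine=libaio --filename=/dev/sdb --direct=1 --sync=1 \
          --rw=write --bs=4K --numjobs=1 --iodepth=1 \
          --runtime=60 --time_based --name=fio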
  10. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    I am on it... will post my findings as soon as possible
  11. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    I appreciate you taking the time, I really do! root@pve11:~# cat /etc/ceph/ceph.conf [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 10.10.12.11/24 fsid =...
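    When comparing two clusters as this thread does, the on-disk ceph.conf is only part of the picture; the effective settings and topology can be dumped for a side-by-side comparison:

      # effective runtime configuration, including values not in ceph.conf
      ceph config dump
      # health summary and OSD/host layout
      ceph -s
      ceph osd tree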
  12. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Thanks for the reply! I was expecting the "use enterprise grade SSD/NVMe" answer. And yes, the corosync, ceph and VM networks are segregated; 10GbE is available and far from saturated. We have a 40TB pool (24 drives) which is 11% utilized, and the load we generate is very low from my...
  13. hepo

    [SOLVED] Ceph (stretched cluster) performance troubleshooting

    Hi team, community, Need some help/guidance/know-how on how to proceed with troubleshooting and resolving the ceph performance issues we are currently observing. The setup: the cluster is stretched primarily across 2 datacenters. The network between the DCs is 10Gb/s, the latency is 1.1ms RTT -...
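    A rough sanity check on those numbers (my arithmetic, not from the thread): a sync write to a pool replicated across both DCs cannot be acknowledged in less than one inter-DC round trip, so at 1.1ms RTT a queue-depth-1 sync workload is capped near 1 / 0.0011 s ≈ 900 IOPS, no matter how fast the drives are.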
  14. hepo

    Very Slow Performance after Upgrade to Proxmox 7 and Ceph Pacific

    We have experienced this reboot 2 times over the last two days... trying to move a VM disk out of ceph to local disk to troubleshoot the ceph performance issue.
  15. hepo

    Proxmox Backup Server 2.0 released!

    Hey all, just updated to 2.0 and noticed that the VLAN option is gone, nor can I update the existing VLAN that I have. Am I missing some component?
  16. hepo

    "ceph : command not allowed" in syslog

    Thanks for the response - nvme-cli was already installed, and I can confirm the errors are gone.
  17. hepo

    "ceph : command not allowed" in syslog

    After a recent upgrade to PVE7 and Ceph Pacific we started receiving emails with the same errors. After checking the logs we concluded that the issue was definitely there prior to the upgrade. Jul 10 00:04:30 pve21 sudo: pam_unix(sudo:auth): auth could not identify password for [ceph] Jul...
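    The resolution reported in the previous result amounts to installing the binary that Ceph's device-health checks invoke via sudo; the errors reportedly stem from that binary being absent:

      # Ceph collects drive health metrics by running nvme through sudo;
      # installing nvme-cli provides the missing command
      apt install nvme-cli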
  18. hepo

    Stretched Cluster (dual DC) with Ceph - disaster recovery

    Just coming back on this to say that the statement above is correct. We were not able to modify the monmap, plus that action creates a much bigger risk for the cluster once restored. Instead we will place another node in a 3rd datacenter that will ensure quorum is maintained for both PVE and Ceph...
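    A sketch of the quorum fix described, assuming the third-DC machine is a full PVE node (the IP below is a placeholder for any existing cluster member):

      # on the new node: join the existing PVE cluster
      pvecm add 10.10.10.21
      # give it a Ceph monitor so monitor quorum also survives
      # the loss of either primary datacenter
      pveceph mon create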
  19. hepo

    One common additional disk for many vm

    I really have no further suggestions for you. I would ditch the UCS and try openmediavault or TrueNAS Core