Search results

  1. [SOLVED] ceph clock skew issue - no way out?

    I'm using the local NTP servers from my datacenter provider. Again, with the same NTP servers and with ntpd on the nodes, things got worse, and I can't reproduce this behavior on other hardware. So my guess is an unstable clocksource (tsc) on all 4 nodes...? Does anybody have experience with...
  2. [SOLVED] ceph clock skew issue - no way out?

    Hi, there are plenty of posts about clock skew issues in this forum. I'm affected too. I've tried different actions to get 4 nodes with identical hardware permanently in sync, with no success. Even this post...
  3. [SOLVED] HA mass migration

    Worked for me, thanks!
  4. [SOLVED] HA mass migration

    Hi, in an HA environment, a mass migration doesn't honor the parallel jobs setting in the GUI. This is really dangerous, because a parallel live migration of >40 VMs saturated the cluster network, which ended in a dead cluster. Is there a way to avoid this scenario, such as restricting parallel jobs...
  5. Proxmox node immediately shuts down after boot

    Without further information this would be a shot in the dark... ...maybe a node that is part of a cluster and lost communication with the other nodes. If so, this would be expected behavior. Check your network, logs, etc.
  6. Bad NFS Performance

    Maybe switching from NFS to iSCSI with ZVOLs would also be an alternative. Anyway, converting the image away from vmdk is mandatory imho. You need high I/O for Exchange, SQL, etc. Take a look at your zpool setup. Prefer striped mirrors over any RAIDZ level. Also get an NVMe for ZIL and L2ARC. Do...
  7. Bad NFS Performance

    Bad idea... Convert it to raw. Not even better... I'd use async as a mount option on the NFS client (i.e., Proxmox). On the ZFS side, also switch on compression (lz4). Get an NVMe SSD for the ZIL.
  8. Bad NFS Performance

    I think it's safe to mount NFS async, since ZFS is doing its job with the ZIL. Have you chosen raw as the image format? Don't use qcow2 on ZFS via NFS!
  9. Ceph PGs <75 or >200 per OSD is just a warning?

    I'm aware of the calculation. Unfortunately, the pg_num of 1024 was set by someone else, so this is the situation I'm dealing with.
  10. Ceph PGs <75 or >200 per OSD is just a warning?

    Hi, I'm going to migrate our cluster from HDDs to SSDs and from filestore with an SSD journal to bluestore. Not a big deal with plenty of time... Unfortunately, the pg_num was set to 1024 with 18 OSDs. Afaik this is not a good value, because if one node with 6 OSDs fails, the cluster will be... (see the PG-per-OSD sketch after these results)
  11. ZFS RAID Types and Speed

    No, bad IO performance on writes and low endurance. Example for NVMe: Seagate Nytro 5000 Mixed-Workload 1.5DWPD 400GB, M.2 (XP400HE30002). Example for PCIe: Intel SSD DC P3700 400GB, PCIe 3.0 x4 (SSDPEDMD400G401). Example for SATA: Intel SSD DC S4600 480GB, 2.5", SATA (SSDSC2KG480G701). NVMe/PCIe...
  12. ZFS RAID Types and Speed

    Afaik an M.2 NVMe SSD will work with a PCIe adapter even on old PCIe 1.0 boards, but with less performance... Anyway, you should look for an enterprise-grade SSD with power-loss protection.
  13. Ceph GUI Communication

    Just create a symlink /etc/ceph/ceph.conf pointing to /etc/pve/ceph.conf (see the sketch after these results).
  14. Free space

    I'm aware of this. If you don't like it, feel free to change it.
  15. Free space

    A zpool is immediately mounted after creation by default. This is NOT bad practice; it is a feature of ZFS.
  16. ZFS RAID Types and Speed

    And this would be the last 5 seconds of writes in the worst case (see the rough numbers after these results). Data that has already been written is not affected.
  17. Shared Storage for a PVE Cluster

    That's good to know. Now I need time and hardware to test performance and reliability... ...any existing use cases are welcome.
  18. Shared Storage for a PVE Cluster

    Thumbs up, but I'd stay on iSCSI. I'd also use write-intent bitmaps to reduce resync time in case one iSCSI node goes down. This would be a nice alternative solution for HA storage. https://raid.wiki.kernel.org/index.php/Write-intent_bitmap
  19. Shared Storage for a PVE Cluster

    That's what I was thinking about over a year ago. Do you have any experience with this setup?
  20. Proxmox VE Ceph Benchmark 2018/02

    How did you do that? I'm confused: were the Micron SSDs set up as separate WAL/DB devices for the Seagate OSDs, or were they set up as OSDs in a separate pool? Regards
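
Results 9 and 10 refer to "the calculation" for PGs per OSD without showing it. Below is a minimal sketch of that arithmetic, assuming a single replicated pool with size=3 (the replica count is not stated in the snippets; only pg_num=1024, 18 OSDs total, and 6 OSDs per node are).

```python
# Rough PGs-per-OSD estimate for the situation in results 9/10.
# Assumption: one replicated pool with size=3 (not stated in the snippets).

def pgs_per_osd(pg_num: int, replica_size: int, num_osds: int) -> float:
    """Average number of PG copies each OSD has to carry."""
    return pg_num * replica_size / num_osds

print(pgs_per_osd(1024, 3, 18))  # ~170.7 -> inside the 75..200 band, so only a warning
print(pgs_per_osd(1024, 3, 12))  # 256.0  -> with one 6-OSD node down, above the 200 warning threshold
```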
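Result 13 boils down to a single symlink on the node, so that the Ceph tools read the cluster-wide config kept under /etc/pve. A minimal sketch in Python; the existence check is an addition for illustration, and it would need to run as root on the PVE node.

```python
# Create /etc/ceph/ceph.conf as a symlink to the cluster-wide /etc/pve/ceph.conf,
# as suggested in result 13. Skips creation if something already exists at that path.
import os

target = "/etc/pve/ceph.conf"
link = "/etc/ceph/ceph.conf"

if not os.path.lexists(link):
    os.symlink(target, link)  # same effect as: ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
```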
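Result 16's "last 5 seconds of writes" is a worst-case window for async data still sitting in the current ZFS transaction group. A back-of-the-envelope sketch of what that window can mean in bytes; the 5-second figure comes from the post, while the write rate is an assumed example value.

```python
# Back-of-the-envelope: async data at risk if power is lost mid-transaction-group.
# The 5 s window comes from result 16; the 500 MB/s write rate is an assumed example.
window_s = 5
write_rate_mb_s = 500

print(f"worst case: ~{window_s * write_rate_mb_s} MB of not-yet-committed writes")  # ~2500 MB
```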