Recent content by michaeljk

  1. Docker no longer runs in LXC under Proxmox 7

     Hi, maybe the following information will help you: https://www.hardwareluxx.de/community/threads/proxmox-stammtisch.1039627/post-28522188 https://github.com/docker/for-linux/issues/711
  2. Total ZFS failure

     With the caches enabled, however, you probably lose a good part of the safety that ZFS is actually supposed to provide (checking the drive write cache is sketched after this list). If the controller reports that the data has already been written out while it is in fact still sitting in some cache in the system, then in the event of a crash or...
  3. Cluster still limited to 2 ms latency?

     Ceph over WAN with a gigabit connection? I don't think that's a good idea; even in a local LAN the storage network should have a faster connection :) What exactly are you planning, since you want to split the nodes across several data centers? Failover capability / a backup site? Then I would...
  4. NVMe ZFS RAID 1 / Proxmox on SSD HW RAID

     @thoand: Booting from a ZFS mirror (RAID1) on NVMe is no longer a problem as of Proxmox 6; tested it myself, it works. In combination with ZFS you should, however, keep the following in mind (an ARC limit is sketched after this list): - Don't use a hardware RAID controller - Depending on the number and RAM requirements of the virtual machines...
  5. Proxmox ZFS on EX61-NVMe - Failed to create ZFS Pool (zfs-RAID1)

     Quick update: Today I tested the installation on an EX42-NVME with the Proxmox 6 beta; there the boot process works without any problems. The performance is lower than expected, though: pveperf /rpool/data CPU BOGOMIPS: 57600.00 REGEX/SECOND: 4497862 HD SIZE...
  6. After Update to 5.3-5: Windows 10 VM slow

     I just upgraded the Proxmox host from 5.2 to 5.3-5. Mostly standard configuration on a Dell T20 machine, ZFS with mirrored hard disks. The update worked fine, except that the only Windows 10 VM is now incredibly slow as soon as it gets to the login screen. There's no high CPU usage or I/O, it just...
  7. HP DL360 G8 + HPE Smart Array P420i Controller

     We used some Gen6 machines on our old cluster some time ago; they had the same P420i controller - no problems, they worked perfectly with SAS and SATA drives with RAID 10 configured.
  8. zfs: finding the bottleneck

     Just a short reply on this issue: KSM_TUNED didn't fix it, so I gave up on this and moved the virtual machines to a second host (HP Gen9 with hardware RAID and LVM-thin installed). Now everything works as expected; I/O is almost always below 1%. I guess that there is some sort of hardware...
  9. zfs: finding the bottleneck

    cat /etc/default/ksmtuned # Defaults for ksmtuned initscript # sourced by /etc/init.d/ksmtuned # installed at /etc/default/ksmtuned by the maintainer scripts # # This is a POSIX shell fragment # # start ksmtuned at boot [yes|no] START=yes ps aux |grep ksm root 133 0.0 0.0 0...
  10. zfs: finding the bottleneck

     cat /sys/kernel/mm/ksm/pages_sharing 0 So it seems that KSM is not being used at the moment because there's enough free memory available?
  11. zfs: finding the bottleneck

    top - 15:20:45 up 3 days, 1:27, 2 users, load average: 5.81, 5.35, 5.20 Tasks: 1866 total, 1 running, 1865 sleeping, 0 stopped, 0 zombie %Cpu(s): 3.3 us, 1.3 sy, 0.0 ni, 81.0 id, 14.4 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 13191573+total, 13136350+used, 552236 free, 50927776...
  12. zfs: finding the bottleneck

    In addition, io-wait from today: 06:44:41 PM CPU %user %nice %system %iowait %steal %idle 07:44:43 PM all 5.61 0.00 1.97 0.44 0.00 91.98 08:44:43 PM all 5.22 0.00 1.99 0.48 0.00 92.31 09:44:43 PM all...
  13. zfs: finding the bottleneck

     After 24 hours, the problem is here again. The reboot was on Friday at 15:30 and all looked good: Fri Sep 16 15:43:09 CEST 2016 total used free shared buffers cached Mem: 131915736 77955020 53960716 54944 22313508 102228 -/+ buffers/cache...
  14. zfs: finding the bottleneck

     No entries in dmesg and CPU usage looked good - the high load value comes mainly from the high I/O wait. So it seems that at some point there's no more free memory. Currently it looks like this (the monitoring commands used here are sketched after this list): uptime 17:17:21 up 3:24, 4 users, load average: 1.72, 1.67, 1.69 arcstat.py time...
  15. zfs: finding the bottleneck

     We watched the new system for a few more days. After a fresh reboot, iostat shows between 3-10% usage for each disk, overall 0.5-0.7 wait in top. These values increase over time; today 6 out of 8 HDDs showed a usage of 100% (!), the other 2 had 30-40%, overall 15-30 wait: (Reboot at 13:50...
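
For the cache discussion in "Total ZFS failure" above, here is a minimal sketch of how one might check and, if necessary, disable a drive's volatile write cache before giving it to ZFS; /dev/sda is only a placeholder, and whether disabling the cache makes sense depends on the controller (a battery- or flash-backed cache changes the picture):

    # Show whether the drive's volatile write cache is enabled (example device: /dev/sda)
    hdparm -W /dev/sda

    # Disable the volatile write cache so acknowledged writes really are on stable storage
    # (trade-off: lower write throughput; unnecessary if the cache is battery/flash backed)
    hdparm -W 0 /dev/sda

    # For SAS drives, sdparm can inspect the WCE (write cache enable) bit instead
    sdparm --get=WCE /dev/sda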
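
For the ARC hint in "NVMe ZFS RAID 1 / Proxmox on SSD HW RAID", a minimal sketch of capping the ZFS ARC so it does not compete with VM memory; the 8 GiB value is only an example and should be chosen from the host's RAM and the memory needs of the VMs:

    # /etc/modprobe.d/zfs.conf - cap the ARC at 8 GiB (example value)
    options zfs zfs_arc_max=8589934592

    # Apply at the next boot; on a root-on-ZFS install, refresh the initramfs first
    update-initramfs -u

    # The limit can also be changed at runtime without a reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max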
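
For the "zfs: finding the bottleneck" posts, a short sketch of the kind of commands used above to watch KSM, the ARC and per-disk utilization over time; depending on the ZFS version the ARC tool may be installed as arcstat or arcstat.py, and the 5-second interval is just an example:

    # Is KSM actually merging pages? 0 means it is not being used at the moment
    cat /sys/kernel/mm/ksm/pages_sharing

    # ARC size, target size and hit rate, sampled every 5 seconds
    arcstat 5

    # Per-disk utilization and wait times, to spot disks sitting at ~100% busy
    iostat -x 5

    # Overall memory situation including buffers/cache
    free -m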
