Search results

  1. command IOTOP = ksoftirqd/0 99%

    Problem still there with Linux 4.13.13-1-pve #1 SMP PVE 4.13.13-31. This is what happens when cloning a VM over iSCSI (whereas on PVE 4.x it takes 15% at most). I will go back to PVE 4.x until someone pushes a fix for this mess...
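
    A quick way to confirm the symptom from a shell (a sketch; iotop may need to be installed separately, and exact output varies by version):

        # show only processes currently doing I/O, per process, with accumulated totals
        iotop -o -P -a
        # or check the CPU usage of the softirq kernel threads directly
        top -b -n 1 | grep ksoftirqd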
  2. Fresh install of PVE 5.1, IO delay of death

    There seem to be a whole bunch of threads describing this problem with PVE 5.x in the forum... :(
  3. command IOTOP = ksoftirqd/0 99%

    I haven't tried upgrading yet, but I can confirm that on this "old" machine the problem is NOT present: pveversion -v proxmox-ve: 4.4-79 (running kernel: 4.4.35-2-pve) pve-manager: 4.4-12 (running version: 4.4-12/e71b7a74) pve-kernel-4.4.35-1-pve: 4.4.35-77 pve-kernel-4.4.35-2-pve: 4.4.35-79...
  4. command IOTOP = ksoftirqd/0 99%

    Please help... this is very burdensome :(
  5. command IOTOP = ksoftirqd/0 99%

    I would be grateful if someone could find a solution, because I would like to keep using version 5.x :(
  6. command IOTOP = ksoftirqd/0 99%

    I confirm this problem with PVE 5.1-36. I have a couple of VMs shared via iSCSI, and I/O delay is around 10 percent with CPU usage at 40 percent... I can confirm a couple of instances of ksoftirqd at 99%, and with PVE 4.x the I/O delay was almost zero. Please help :(
  7. [SOLVED] Extremely poor I/O performance after upgrade

    The title came out as "Estremely poor performance I/O performance" because I didn't have enough sleep; it should have just read "Extremely poor I/O performance". I was using a 32-bit OS instead of a 64-bit one. Fml.
  8. [SOLVED] Extremely poor I/O performance after upgrade

    Hi guys, I have been struggling since the 4.4-12 to 5.1-36 upgrade (to be fair, it's a new deployment) with terrible I/O performance via iSCSI (after some testing, NFS also seems affected). The problem doesn't always show up, but I have been able to reproduce it in this manner: a VM just booted up with...
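
    A simple way to exercise the shared storage for this kind of reproduction (a sketch; /mnt/pve/teststore is a hypothetical mount point for the affected storage, and iostat requires the sysstat package):

        # write 1 GiB with direct I/O so the backend is actually hit, not the page cache
        dd if=/dev/zero of=/mnt/pve/teststore/ddtest.bin bs=1M count=1024 oflag=direct
        # meanwhile, watch per-device utilization and I/O wait from another shell
        iostat -x 2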
  9. pvestatd storage is not online

    Thank you fabian, I will try that asap.
  10. pvestatd storage is not online

    Fair enough. I think I can test this by sniffing the traffic in normal conditions and when this error occurs, to see what differs on the network. By the way, is it possible to see output for all the responses to this online check, that is to say, also the responses from the NFS server...
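
    A capture along these lines could work (a sketch; 192.0.2.10 stands in for the NFS server's address, and port 111 is included because the portmapper is typically involved in the online check):

        # record traffic to/from the NFS server while the check runs
        tcpdump -i any -w nfs-check.pcap 'host 192.0.2.10 and (port 2049 or port 111)'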
  11. pvestatd storage is not online

    Wait, are you saying that "status update time (8.155 seconds)" means that instead of replying within 2 seconds, the server took over 8 seconds to do so?
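
    To put a rough number on how quickly the NFS server answers (a sketch; this approximates, rather than reproduces, the check pvestatd performs, and 192.0.2.10 is a placeholder address):

        # time a portmapper query against the NFS server
        time rpcinfo -p 192.0.2.10
        # or time listing its exports
        time showmount -e 192.0.2.10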
  12. pvestatd storage is not online

    Thank you for this information; it will be useful for troubleshooting, especially because the VLAN where the NFS server resides is supposed to be mostly idle, so I'm very curious to understand why the server is not responding fast enough. You're right, Manu, I should have started this...
  13. pvestatd storage is not online

    Hello, I'm receiving this error message in /var/log/syslog on all nodes of my cluster. The event repeats at short or long intervals (1-30 minutes) and may recur every couple of days. I believe the network is not to blame. At most, this could be a problem on the NAS...
  14. Node rebooting without apparent reason

    Thank you for your reply. I'm not sure whether I have ZFS; I would say no, because I should only have logical volumes. I do use ZFS, but on the NAS :) By the way, I might have figured out what was happening. The problem seemed to be caused by a misconfiguration of the internal NTP server. For some...
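
    If clock skew is suspected, each node's sync state is cheap to check (a sketch; a stock PVE 5.x node typically runs systemd-timesyncd, and node1/node2/node3 are hypothetical hostnames):

        # show whether the clock is synchronized and by which service
        timedatectl status
        # compare wall-clock time across the cluster in one shot
        for n in node1 node2 node3; do ssh "$n" date +%s; done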
  15. Node died but VM failed to migrate with HA

    Are you referring to the NIC used to create the cluster, for example pvecm add 192.168.50.155?
  16. Node died but VM failed to migrate with HA

    Thank you. Is shutting down all the switch ports connected to a node a good test for this purpose? I assume that testing fencing doesn't actually require unplugging a server's power cord :)
  17. Node died but VM failed to migrate with HA

    The cluster is supposed to use a software watchdog and fencing, that is to say, I assumed it would work out of the box with no configuration. Am I missing something?
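
    For reference, PVE's HA stack falls back to the softdog kernel module unless a hardware watchdog is explicitly configured; the selection lives in /etc/default/pve-ha-manager (a sketch; ipmi_watchdog is only an example module):

        # /etc/default/pve-ha-manager
        # uncomment to use a hardware watchdog instead of the softdog default
        # WATCHDOG_MODULE=ipmi_watchdog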
  18. Node died but VM failed to migrate with HA

    Hello, I have a three-node cluster and everything seems to be running OK. All VMs are connected to iSCSI shared storage, and I can live migrate from one node to another without problems in a few milliseconds. However, the other day a node rebooted for no apparent reason, and the only VM managed by HA...
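
    When an HA-managed VM fails to recover after such a reboot, the usual first data points are the cluster and HA state (a sketch; both commands ship with PVE):

        # quorum and membership as seen by corosync
        pvecm status
        # state of the HA resources and of the manager
        ha-manager status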
  19. Installation Problem

    Have you tried running apt-get dist-upgrade?
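
    On a Proxmox node the usual sequence would be (a sketch; it assumes the PVE apt repositories are already configured correctly):

        # refresh package lists, then pull in the full set of updates
        apt-get update
        apt-get dist-upgrade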