Search results

  1. 4.15 based test kernel for PVE 5.x available

    I applied it to my quorum node; drbd9 synchronized immediately. Tomorrow I will update the remaining nodes and report back. Thanks, rob
  2. 4.15 based test kernel for PVE 5.x available

    Great. OK, many thanks, have a nice weekend. rob
  3. 4.15 based test kernel for PVE 5.x available

    A question: under BSD, the man page for igb suggests some tunables, and a couple are jumbo-frame related: http://manpages.ubuntu.com/manpages/trusty/man4/if_igb.4freebsd.html kern.ipc.nmbclusters — the maximum number of mbuf clusters allowed. If the system has more than...
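
    On FreeBSD, tunables like these are typically set in `/boot/loader.conf`. A minimal sketch, following the if_igb(4) man page referenced above; the values are illustrative assumptions, not recommendations:

    ```
    # /boot/loader.conf -- illustrative values (assumptions), per if_igb(4)
    kern.ipc.nmbclusters="262144"   # max mbuf clusters; worth raising when jumbo frames are in use
    hw.igb.rxd="4096"               # receive descriptors per queue
    hw.igb.txd="4096"               # transmit descriptors per queue
    ```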
  4. 4.15 based test kernel for PVE 5.x available

    Another note: in the least powerful node (used only for quorum and storage redundancy) I had to add a fourth interface, which uses the "jme" driver. This one is bridged with an "igb" one to form the "drbdbr" interface. I will try to change this, put two igb interfaces in the bridge, and see if something...
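
    A bridge like the `drbdbr` described above would look roughly like this in `/etc/network/interfaces` on a Proxmox node — a sketch only; the port names `eth2`/`eth3` and the address are assumptions:

    ```
    # /etc/network/interfaces -- sketch; eth2 (igb) and eth3 (jme) are assumed names
    auto drbdbr
    iface drbdbr inet static
            address 192.168.100.3/24
            bridge-ports eth2 eth3
            bridge-stp off
            bridge-fd 0
            mtu 9000
    ```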
  5. 4.15 based test kernel for PVE 5.x available

    As a further note, I must say that the synchronization problem is very subtle. Most resources synchronize immediately, but two or three consistently refuse: synchronization starts, reaches 100%, but never finishes; in the syslog of the source node I see (quoting from my previous message): rob
  6. 4.15 based test kernel for PVE 5.x available

    I am on pve-no-subscription and dist-upgrade quite often, so I went from 4.13 through 4.15.17-1, 4.15.17-2, and 4.15.17-3. Only the latest gives me unresolvable synchronization problems under drbd9 with "mtu 9000" in place. Aaah, right! Anyway, at the moment I have kept 4.15.17-3 installed...
  7. 4.15 based test kernel for PVE 5.x available

    Hello Thomas, nice to meet you here ... I also think that @Alwin overlooked it; after all, that post does look duplicated :) . Regarding the Intel drivers, I had absolutely nothing to blame igb for until now; 4.15.17-2 (with the in-tree igb driver) works flawlessly, and I have the distinct feeling that...
  8. igb driver on latest kernel 4.15.17-3-pve - net connections over jumbo frames anomalies

    Sorry, but I do not want to dedicate further time (it would involve rebooting all nodes) to what seems to me clearly a driver issue. I will follow the 4.15 thread. Please consider this one closed. Thanks to all, rob
  9. 4.15 based test kernel for PVE 5.x available

    @Alwin Ehm, I would like to note respectfully that this is not double posting. I re-posted here because I thought it was less confusing, not more. I stated clearly (I hope) in the former thread as well that my issue is more appropriate here.
  10. igb driver on latest kernel 4.15.17-3-pve - net connections over jumbo frames anomalies

    I forgot to mention that I am using the community repo, and the 4.15 series is still experimental; there is a dedicated thread: https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/page-6#post-212557
  11. 4.15 based test kernel for PVE 5.x available

    Reposting here: https://forum.proxmox.com/threads/igb-driver-on-latest-kernel-4-15-17-3-pve-net-connections-over-jumbo-frames-anomalies.44555/ I noticed this thread just now, sorry for the noise ... ======== I recently updated my 3-node PVE 5.2 cluster as part of routine maintenance, installing the new...
  12. igb driver on latest kernel 4.15.17-3-pve - net connections over jumbo frames anomalies

    I already tried lowering the MTU to 1500: it solves the connection problems, but the performance penalty is unbearable. I know that I can boot with a previous kernel, thanks for the hint: I wasn't aware of the "grub-set-default" command; very handy. My opinion is that in many situations a driver that does not...
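
    Pinning a previous kernel with `grub-set-default` can be sketched as follows on a Debian-based system such as PVE; the exact menu-entry title below is an assumption and should be checked against the node's `/boot/grub/grub.cfg`:

    ```
    # /etc/default/grub -- GRUB_DEFAULT=saved is needed so the choice recorded
    # by grub-set-default is honoured at boot
    GRUB_DEFAULT=saved

    # After running update-grub, pin the older kernel (the entry title is an
    # assumption; list real titles with: grep menuentry /boot/grub/grub.cfg):
    #   grub-set-default "Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 4.15.17-2-pve"
    ```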
  13. igb driver on latest kernel 4.15.17-3-pve - net connections over jumbo frames anomalies

    I recently updated my 3-node PVE 5.2 cluster as part of routine maintenance, installing the new pve-kernel-4.15.17-3-pve. I started having problems synchronizing drbd resources over a link with jumbo frames enabled: Jun 14 08:59:26 pve1 kernel: [40906.042440] drbd vm-102-disk-1/0 drbd103 pve3: Began...
  14. [SOLVED] xterm.js not enabled in GUI for kvm vms after upgrade to pve-manager 5.1-46

    Solved in qemu-server (5.0-22); from the changelog: ... * add serial:1 to vm-status when config has a serial device configured ...
  15. [SOLVED] xterm.js not enabled in GUI for kvm vms after upgrade to pve-manager 5.1-46

    Just analyzed, and a solution is in the works ... (see Bugzilla for details). Very good job, Proxmox! rob
  16. [SOLVED] xterm.js not enabled in GUI for kvm vms after upgrade to pve-manager 5.1-46

    After upgrading to pve-manager (5.1-46) stable, the xterm.js console is not enabled for KVM VMs with a serial port configured. For container VMs, xterm.js is shown in the GUI; qm terminal <vmid> works; the xterm.js console works for the PVE host and container VMs. I reported this in Bugzilla as well...
  17. PVE 5 - lvs reports wrong %Snap size for thin LV

    Having studied a little how thin provisioning works, it seems to me that the drbdmanage thin_lv plugin code is wrong. There is no need to sum Snap%; the pool's Data% accounts for all allocated space. Sorry for the noise. rob
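
    The accounting argument can be illustrated with a toy model (hypothetical block sets, not LVM code): the pool's Data% counts each allocated block once, while summing per-volume percentages counts copy-on-write-shared blocks twice.

    ```python
    # Toy model (assumption, not LVM internals): a thin pool stores shared
    # blocks once, so the pool's Data% already covers all allocated space.

    POOL_BLOCKS = 100  # total blocks in the thin pool

    # After a snapshot, most blocks are shared (copy-on-write); only changed
    # blocks diverge between the LV and its snapshot.
    lv_blocks = set(range(0, 40))                        # LV maps blocks 0..39
    snap_blocks = set(range(0, 40)) - {0, 1} | {40, 41}  # shares 38, owns 2 new

    pool_used = len(lv_blocks | snap_blocks)        # each block counted once
    naive_sum = len(lv_blocks) + len(snap_blocks)   # double-counts shared blocks

    pool_data_pct = 100 * pool_used / POOL_BLOCKS
    naive_pct = 100 * naive_sum / POOL_BLOCKS

    print(pool_data_pct)  # 42.0 -- what the pool's Data% reflects
    print(naive_pct)      # 80.0 -- what summing per-volume percentages claims
    ```
    
    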
  18. PVE 5 - lvs reports wrong %Snap size for thin LV

    I opened a bug for this in Bugzilla just now. rob
  19. PVE 5 - lvs reports wrong %Snap size for thin LV

    Hello, I have a freshly upgraded PVE 5 installation, and I think there's something wrong in the lvs command output for thin LVs. When I call lvs with the option for showing snapshot sizes, the output reports Snap% equal to Data% for thin-provisioned volumes (last two columns). I have no...
  20. 4.4: Error opening spice console on other node

    In a freshly upgraded cluster at 4.4-1/eb2d6f1e, if I try to open a SPICE console on a VM residing on a cluster node different from the one I'm logged into, I receive an "HTTP proxy connection failed: 401 invalid ticket" error. rob