Proxmox VE 6.0 beta released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jul 4, 2019.

  1. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    FYI, those packages are already out and publicly available through the pvetest package repository.
     
  2. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    In general we give no guarantees for a beta, so if you put important services on it and something breaks just due to the beta status, one cannot really complain.
    But that's just a general warning; for now the single thing which could break, as far as I can tell, is live migration for a small set of VMs which set the (non-default) "q35" machine type and did a fresh start on a PVE 6 Beta. If they got migrated from 5 to 6 Beta they'll work; it's really just the ones freshly started on the Beta with the q35 machine type.
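    For reference, checking or setting the machine type works with the usual qm commands (VM ID 100 is just an example):
    Code:
    # show the configured machine type of VM 100 (no output means the default i440fx)
    qm config 100 | grep ^machine
    # set the q35 machine type explicitly
    qm set 100 --machine q35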
     
  3. kakohari

    kakohari New Member

    Joined:
    Jul 11, 2019
    Messages:
    4
    Likes Received:
    0
    Is there a set of tests which I should run on an installed beta to provide feedback? It's not a problem if things break; it's a test machine.
     
  4. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    They can co-exist but won't see each other, so no.

    For a seamless upgrade? There are a few possibilities, but one that works could be (see the command sketch below):

    * upgrade both nodes to "corosync-3" and then to Proxmox VE 6 (once it's released as stable)
    * move all VMs over to one node
    * re-install the now-empty node
    * set "pvecm expected 1" on the remaining node and do a "pvecm delnode OLD-NOW-REINSTALLED-NODE"
    * (re-)join the newly installed node
    * move all VMs to the new node
    * do the same as above for the other node, if wanted

    Note: backups are recommended. You could also test this procedure first with two VMs running PVE.
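    A minimal sketch of the cluster commands from the steps above (the node name and IP are placeholders):
    Code:
    # on the remaining node: allow quorum with a single vote
    pvecm expected 1
    # remove the old (now re-installed) node from the cluster
    pvecm delnode OLD-NOW-REINSTALLED-NODE
    # on the freshly installed node: join the cluster again
    pvecm add IP-OF-REMAINING-NODE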
     
    #84 t.lamprecht, Jul 12, 2019
    Last edited: Jul 12, 2019
  5. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    There's no full end-to-end test suite publicly available yet, no. But simply playing around and trying things out, perhaps something you haven't used yet, already helps a lot.
     
  6. morph027

    morph027 Active Member

    Joined:
    Mar 22, 2013
    Messages:
    413
    Likes Received:
    51
    So far everything works fine on my upgraded single node.

    I will get my hands on a brand-new AMD Epyc server soon, where I will try to run the beta.
     
  7. yfdoor

    yfdoor New Member

    Joined:
    Jul 12, 2019
    Messages:
    4
    Likes Received:
    0
    Thanks! Could you please advise when, according to your team's plan, the next ISO for 6.0 will be released?
     
  8. kakohari

    kakohari New Member

    Joined:
    Jul 11, 2019
    Messages:
    4
    Likes Received:
    0
    "It is ready when it is ready" seems to be the answer... ;)
    As there seem to be no major issues one might think of something like "there is a chance that within a month the final ISO might be uploaded" :D
     
  9. Gaia

    Gaia Member

    Joined:
    May 18, 2019
    Messages:
    45
    Likes Received:
    0
    I had been running what I believe was 6.0.1 and ran a dist-upgrade today. I am now on proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)

    There is a delay during boot which wasn't there before:

    [screenshot: boot console showing the delayed start job]

    "A start job is running..." goes on for a little over a minute. When it finally moves past this stage, the box beeps (3 times, I think).

    Not a big deal, but is this something in the current version or in my machine? The only change was that I disabled HPET in the BIOS (for the same reason noted here), whereas before I just used "tsc=reliable" as a kernel parameter. (BTW, TSC is truly reliable; HPET isn't.)
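    For reference, this is how such a kernel parameter can be set, assuming a GRUB-based boot (a root-on-ZFS UEFI install uses a different boot mechanism):
    Code:
    # /etc/default/grub -- append the parameter to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet tsc=reliable"
    # then regenerate the GRUB config and reboot
    update-grub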
     
  10. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    I'd guess it's a combination of both, but that probably does not help you :) It'd be interesting to know why udev now needs a while to settle things... Take a look at the "dmesg" output during that time, and maybe check
    Code:
    systemd-analyze blame
    for the case that systemd-udev-settle.service just needs to wait on something else.

    Maybe also try to revert the HPET change, just temporarily, to see if that would solve the issue.
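    For example, something like this (the critical-chain call is an extra check to see what the settle service itself waits on):
    Code:
    # kernel messages with readable timestamps
    dmesg -T | tail -n 50
    # which units took longest during boot
    systemd-analyze blame | head
    # what systemd-udev-settle.service was waiting on
    systemd-analyze critical-chain systemd-udev-settle.service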
     
  11. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,251
    Likes Received:
    179
    Or some extra (custom?) udev rules?
     
  12. Gaia

    Gaia Member

    Joined:
    May 18, 2019
    Messages:
    45
    Likes Received:
    0
    Code:
    # systemd-analyze blame
         1min 8.596s systemd-udev-settle.service
         1min 8.567s ifupdown-pre.service
              2.636s lvm2-pvscan@259:0.service
              2.127s pvebanner.service
              1.865s pvedaemon.service
              1.761s pve-cluster.service
              1.644s dev-md0.device
              1.616s systemd-modules-load.service
              1.589s lvm2-monitor.service
              1.277s pvestatd.service
              1.268s pve-firewall.service
              1.170s zfs-import-cache.service
              1.139s smartd.service
              1.047s rrdcached.service
    Not a single custom udev rule.

    Code:
    # dmesg | egrep 'ifupdown|udev|helper|wait'
    [    0.256842] process: using mwait in idle threads
    I will try enabling HPET again
     
  13. maxprox

    maxprox Member
    Proxmox Subscriber

    Joined:
    Aug 23, 2011
    Messages:
    304
    Likes Received:
    12
  14. Gaia

    Gaia Member

    Joined:
    May 18, 2019
    Messages:
    45
    Likes Received:
    0
  15. vanes

    vanes New Member

    Joined:
    Nov 23, 2018
    Messages:
    5
    Likes Received:
    0
    I am trying to limit ZFS memory usage on the PVE 6 beta using this manual: https://pve.proxmox.com/wiki/ZFS_on_Linux
    I added "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then ran "update-initramfs -u", then rebooted.
    After the reboot I ran "arcstat" and see:
    Code:
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c 
    12:40:56     0     0      0     0    0     0    0     0    0   678M  3.7G
    My setup is the latest PVE 6 beta with root on ZFS (RAID10) and UEFI boot. As the arcstat output shows, the 2 GiB limit was not applied (the target size "c" is still 3.7G).
    Need some help!
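    For reference, the steps I did, plus a runtime check (writing the sysfs parameter should take effect immediately, without a reboot):
    Code:
    # /etc/modprobe.d/zfs.conf -- limit the ARC to 2 GiB
    options zfs zfs_arc_max=2147483648

    # rebuild the initramfs so the option is applied at early boot
    update-initramfs -u
    # verify the value the module actually uses
    cat /sys/module/zfs/parameters/zfs_arc_max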
     
  16. kukachik

    kukachik New Member

    Joined:
    Jul 13, 2019
    Messages:
    2
    Likes Received:
    0
    Got a problem with openvswitch: after reboot I didn't get network.

    In the log:
    Code:
    Jul 13 15:51:48 node1 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
    Jul 13 15:51:48 node1 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
    Jul 13 15:51:48 node1 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
    Jul 13 15:51:48 node1 systemd[1]: Dependency failed for Raise network interfaces.
    Jul 13 15:51:48 node1 systemd[1]: networking.service: Job networking.service/start failed with result 'dependency'.
    Jul 13 15:51:48 node1 systemd[1]: systemd-udev-settle.service: Main process exited, code=exited, status=1/FAILURE
    Jul 13 15:51:48 node1 systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
    Jul 13 15:51:48 node1 systemd[1]: Failed to start udev Wait for Complete Device Initialization.
    My /etc/network/interfaces:
    Code:
    auto lo
    iface lo inet loopback
    
    iface eno2 inet manual
    
    allow-vmbr0 eno1
    iface eno1 inet manual
            ovs_type OVSPort
            ovs_bridge vmbr0
    
    auto vmbr0
    iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports eno1 vlan148
    
    allow-vmbr0 vlan148
    iface vlan148 inet static
            address  10.250.100.21
            netmask  255.255.255.0
            gateway  10.250.100.253
            ovs_type OVSIntPort
            ovs_bridge vmbr0
    
    If I log in after the reboot and run
    Code:
    ifup vmbr0
    
    the network starts working.
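    To narrow down why ifupdown-pre failed (it basically just waits for udev to settle), I guess one could check:
    Code:
    # why did these units fail?
    systemctl status ifupdown-pre.service systemd-udev-settle.service
    # does udev settle at all when invoked manually?
    udevadm settle --timeout=120; echo $?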
     
  17. Nemesiz

    Nemesiz Active Member

    Joined:
    Jan 16, 2009
    Messages:
    670
    Likes Received:
    42
    You can change the ARC size live with:

    Code:
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

    and

    Code:
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_min

    if you want a fixed ARC size. I use /etc/rc.local for custom tuning of the system at startup.
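    A minimal sketch of such an /etc/rc.local (it must be executable for systemd's rc-local service to pick it up):
    Code:
    #!/bin/sh
    # pin the ZFS ARC to 2 GiB at boot
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_min
    exit 0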
     
    Gaia likes this.
  18. kukachik

    kukachik New Member

    Joined:
    Jul 13, 2019
    Messages:
    2
    Likes Received:
    0
    I found IPMI call traces in the log:

    Code:
    [  136.820241] openvswitch: Open vSwitch switching datapath
    [  137.419453] bpfilter: Loaded bpfilter_umh pid 999
    [  141.378328] sctp: Hash tables configured (bind 512/512)
    [  148.469395] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and deleted  link
    [  243.111684] INFO: task kworker/9:1:140 blocked for more than 120 seconds.
    [  243.111722]       Tainted: P          IO      5.0.15-1-pve #1
    [  243.111744] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    [  243.111773] kworker/9:1     D    0   140      2 0x80000000
    [  243.111784] Workqueue: events redo_bmc_reg [ipmi_msghandler]
    [  243.111786] Call Trace:
    [  243.111796]  __schedule+0x2d4/0x870
    [  243.111798]  schedule+0x2c/0x70
    [  243.111800]  schedule_preempt_disabled+0xe/0x10
    [  243.111803]  __mutex_lock.isra.10+0x2e4/0x4c0
    [  243.111806]  ? __switch_to_asm+0x34/0x70
    [  243.111808]  ? __switch_to_asm+0x40/0x70
    [  243.111810]  __mutex_lock_slowpath+0x13/0x20
    [  243.111811]  mutex_lock+0x2c/0x30
    [  243.111814]  __bmc_get_device_id+0x65/0xaf0 [ipmi_msghandler]
    [  243.111815]  ? __switch_to_asm+0x40/0x70
    [  243.111817]  ? __switch_to_asm+0x34/0x70
    [  243.111818]  ? __switch_to_asm+0x40/0x70
    [  243.111819]  ? __switch_to_asm+0x34/0x70
    [  243.111824]  ? __switch_to+0x96/0x4e0
    [  243.111825]  ? __switch_to_asm+0x40/0x70
    [  243.111826]  ? __switch_to_asm+0x34/0x70
    [  243.111827]  ? __switch_to_asm+0x40/0x70
    [  243.111830]  redo_bmc_reg+0x54/0x60 [ipmi_msghandler]
    [  243.111835]  process_one_work+0x20f/0x410
    [  243.111836]  worker_thread+0x34/0x400
    [  243.111840]  kthread+0x120/0x140
    [  243.111842]  ? process_one_work+0x410/0x410
    [  243.111843]  ? __kthread_parkme+0x70/0x70
    [  243.111845]  ret_from_fork+0x35/0x40
    [  243.111870] INFO: task systemd-udevd:685 blocked for more than 120 seconds.
    [  243.111896]       Tainted: P          IO      5.0.15-1-pve #1
    [  243.111918] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    [  243.111946] systemd-udevd   D    0   685    587 0x80000104
    
    My system is a very old Dell C6100. It was quite stable before the update. In a test VM, the update from 5.4 to 6 with openvswitch went flawlessly.
     
  19. Gaia

    Gaia Member

    Joined:
    May 18, 2019
    Messages:
    45
    Likes Received:
    0
    Reverting HPET back to enabled did not help. And it is 16 beeps (15 short + 1 long); I had to record it and slow it down to 20% to be able to count them.

    I think I found the culprit: "systemctl status udev" shows "Spawned process '/usr/sbin/rear udev' [708] is taking longer than 59s to complete" and "sdd2: Worker [496] processing SEQNUM=4065 is taking a long time".

    So running

    Code:
    # rear udev
    
    Cannot include keyboard mappings (no keymaps default directory '')
    takes about a minute and then beeps 16 times. Nailed it, thanks!
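    In case someone hits the same thing, the udev rule that spawns it can be located with a generic search of the standard rule directories:
    Code:
    grep -r rear /etc/udev/rules.d /lib/udev/rules.d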
     
    #99 Gaia, Jul 13, 2019
    Last edited: Jul 13, 2019
  20. Ryzen3600

    Ryzen3600 New Member

    Joined:
    Sunday
    Messages:
    3
    Likes Received:
    0
    Is it possible to get newer kernels for new AMD Zen 2 processor support, even if it's an unofficial kernel build? Or can we install the Ubuntu kernel?

    If all that is not possible, which modules do I need to enable to build my own kernel?
     