Proxmox VE 5.4 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Apr 11, 2019.

  1. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,568
    Likes Received:
    412
@MoxProxxer , based on your other threads you have mixed up a lot, e.g. you started on Debian Stretch, upgraded to Buster, and then went back to Stretch - all this is not really possible and is the cause of your issues.

    I highly recommend a clean Stretch-based installation without any Buster packages.

    And please open a new thread, as your issues are not related to the new release announcement.
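    A quick way to spot such a mixed setup is to check which Debian suites the APT sources reference (a small sketch; `apt_suites` is just an illustrative helper name):

    ```shell
#!/bin/sh
# Print the Debian suites referenced by the APT sources below a directory
# (default: /etc/apt). Seeing both "stretch" and "buster" listed means the
# system is a mixed "FrankenDebian" and should be reinstalled cleanly.
apt_suites() {
    dir="${1:-/etc/apt}"
    grep -rhE '^deb ' "$dir/sources.list" "$dir/sources.list.d" 2>/dev/null \
        | awk '{print $3}' | sort -u
}

apt_suites
    ```

    On a clean PVE 5.x install this should only print `stretch` and its companion suites such as `stretch-updates`, never `buster`.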
     
  2. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
Downgrade?? Your installation seems to be a bit off in general. Do you by any chance already have a newer Debian (testing/Buster) release installed?
    Your system's package state at least suggests so, and if this is a production system I can only recommend installing a clean PVE, ideally from our ISO. If you run a so-called "FrankenDebian", you are running a ticking time bomb..
     
  3. T.Herrmann

    T.Herrmann New Member

    Joined:
    Aug 10, 2018
    Messages:
    8
    Likes Received:
    0

Thanks to the Proxmox team!!!


    Question: Is there a plan for improved usability of the ZFS storage plugin? We are waiting for an option for ZFS dataset creation and snapshot management in this nice GUI plugin.

    Is this maybe part of Proxmox VE 6.0, including ZFS 0.8 with its trim and encryption features?
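    In the meantime, datasets can be created and registered from the CLI; a hedged sketch (the pool and storage names below are purely illustrative and must match your own setup):

    ```shell
# Create a dedicated dataset for guest images (rpool/data/vmstore is
# an example name) and register it with PVE as a zfspool storage.
zfs create rpool/data/vmstore
pvesm add zfspool vmstore --pool rpool/data/vmstore --content images,rootdir

# Manual snapshot of the whole dataset, outside the per-guest GUI snapshots:
zfs snapshot rpool/data/vmstore@pre-upgrade
    ```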
     
  4. hanh_bk

    hanh_bk New Member

    Joined:
    Mar 10, 2019
    Messages:
    2
    Likes Received:
    0
I am very happy to hear that you have released a new version. However, what I want most is support for the Ceph Mimic version, which is not yet available. I look forward to this release being supported soon.
     
  5. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
Once ZFS 0.8 is released we'll work to integrate it, and its new stable features, into PVE in a timely manner.
     
    Lucio Magini likes this.
  6. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
1. An upgrade from an existing Luminous cluster to Mimic Nautilus is possible, but needs a bit of work; we're on it.
    2. The Ceph Mimic Nautilus release uses new C++ features only available in quite new compilers, which is not the case for those shipped by Debian Stretch (GCC 6). As the compiler and its libc are still the fundamental base of any modern Linux distribution, this is not easily solved by cross-compiling or backporting new compilers - one is guaranteed to run into trouble there, which, for a technology used to provide a highly available and reliable storage backing like Ceph, is a no-go for us.

    But the next major release, Proxmox VE 6, based upon Debian Buster, has the required support for those shiny new C++ language features, and we can and will provide a clean and stable solution with Ceph Mimic Nautilus there.

    Edit: I actually talked about Ceph Nautilus planned for 6.0, sorry.
     
    #26 t.lamprecht, Apr 14, 2019
    Last edited: Apr 15, 2019
  7. hanh_bk

    hanh_bk New Member

    Joined:
    Mar 10, 2019
    Messages:
    2
    Likes Received:
    0
Thanks for your answer.
    Can you tell me when version 6.0 will be released?
     
  8. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
No, I'm afraid I cannot give any date.. I mean, Buster needs to come out first, I'd guess :)
    But I would not wait for it for new setups; upgrades will be possible, and Proxmox VE 5.4 with Luminous is, IMO, a very good release.
     
  9. mailinglists

    mailinglists Active Member

    Joined:
    Mar 14, 2012
    Messages:
    390
    Likes Received:
    34
    Just wanted to say thank you and good job!
Looking forward to testing "Suspend to disk/hibernate support for QEMU/KVM guests" :)
    It would allow me to do upgrades without live migration. :)
     
  10. snowpoke

    snowpoke New Member

    Joined:
    Nov 15, 2014
    Messages:
    12
    Likes Received:
    0
Hi, thanks for the new release!

    Unfortunately, after upgrading from 5.3 to 5.4 there is an issue with accessing the options of the CD-ROM.
    Double-clicking on the CD-ROM doesn't trigger the .iso selection modal/popup.
    After downgrading the pve-manager package to 5.3-12 it is possible again.
     
  11. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
The CD-ROM edit window for VMs works just fine here in 5.4; I just re-tested with Firefox 66 and Chromium 73. Can you retry and ensure that the browser cache is cleared?
    If it is still an issue after that, please open a new thread with additional info such as the browser used and its version, the `pveversion -v` output, and whether you have a touch screen - all info that could possibly be relevant.
     
  12. Whatever

    Whatever Member

    Joined:
    Nov 19, 2012
    Messages:
    199
    Likes Received:
    5
  13. sb-jw

    sb-jw Active Member

    Joined:
    Jan 23, 2018
    Messages:
    551
    Likes Received:
    49
I see it a bit differently, because Croit (a great Ceph deployment tool) already has this, and there was no trouble using Ceph Mimic on Debian.
    Release notes: https://croit.io/2018/09/23/2018-09-23-debian-mirror

    Many users of the HCI cluster are a little mad. A colleague is already thinking about passing all SSDs through to a single VM so he can run the newest Ceph version and not have to wait on you guys. Yes, PVE is a great tool and you are right when you say it must run stable - but other companies have already gotten this done, and I think Croit has many big customers.
    Nautilus is now out and our cluster runs on Luminous... That's now two versions in between, and that's not really nice.

    For example (new in Nautilus): "Embedded Grafana Dashboards (derived from Ceph Metrics)" - I spent the last 3 days pulling some metrics from the cluster into Graphite and creating multiple dashboards.
     
    DerDanilo likes this.
  14. Kmgish

    Kmgish New Member
    Proxmox Subscriber

    Joined:
    May 31, 2015
    Messages:
    27
    Likes Received:
    3
    Thank you for the 5.4 release and your ongoing efforts to improve the usability of ceph as deployed by Proxmox. Can anyone comment on any improvements planned for the support of rbd-mirror configurations?

    Thanks,
    K
     
  15. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,299
    Likes Received:
    508
See the last paragraph on that linked page for why this is not an option for PVE:

    Just switching out libc might be possible for a single-use installation with a very static and small set of installed software (i.e., only Ceph), but it is not for PVE (which aims to remain compatible with Debian packages from the underlying base release).
     
  16. harvie

    harvie Member

    Joined:
    Apr 5, 2017
    Messages:
    83
    Likes Received:
    12
BTW, why is there still QEMU 2 in Proxmox? QEMU 4.0.0 is already out...
     
  17. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
Because we prefer stability over the bleeding edge and the unavoidable possible bugs and regressions of new releases, which are far from ideal in production.
    That said, we packaged 3.0.1 a while ago; it's currently available through our pvetest repository.
    Further, one needs to note that QEMU had two big version bumps in under a year lately, while the 2.x series was released over a span of multiple years - one shouldn't be too fixated on version numbers, IMO :)
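    For reference, trying that 3.0.1 package means enabling the pvetest repository, e.g. via an APT source entry like the following (only advisable on non-production systems; the file name is illustrative):

    ```
# /etc/apt/sources.list.d/pvetest.list
deb http://download.proxmox.com/debian/pve stretch pvetest
    ```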
     
  18. snowpoke

    snowpoke New Member

    Joined:
    Nov 15, 2014
    Messages:
    12
    Likes Received:
    0
This problem still persists for me.
    My users and I tested it in different browsers (Chrome, Chromium, Firefox) on different OSes (Ubuntu 18.04, Win 10).

    Please check whether you are able to reproduce it by creating a new user with only the "PVEVMUser" permission.
    For a user with a minimal set of permissions, the "Edit" button is disabled for the virtual CD-ROM.
    For a user with admin permissions there is no such problem.

    Maybe this issue in bugtracker is related?
    https://bugzilla.proxmox.com/show_bug.cgi?id=2197
     
    #38 snowpoke, May 2, 2019
    Last edited: May 3, 2019
  19. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,290
    Likes Received:
    184
Yes, I can now. Thanks for the pointer.

    Yes, let's move the discussion regarding this specific issue over to the Bugzilla.
     
    snowpoke likes this.
  20. SteveW

    SteveW New Member

    Joined:
    May 22, 2019
    Messages:
    3
    Likes Received:
    0
    Hi,

I installed the latest version with updates.

    The server crashes every 48 hrs or so, also displaying "Detected Hardware Unit Hang" on the Intel Corporation Ethernet Connection (2) I219-LM (rev 31).

    Please advise.

    Further details below.

    Linux scrypt 4.15.18-14-pve #1 SMP PVE 4.15.18-39 (Wed, 15 May 2019 06:56:23 +0200) x86_64 GNU/Linux

    proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
    pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
    pve-kernel-4.15: 5.4-2
    pve-kernel-4.15.18-14-pve: 4.15.18-39
    pve-kernel-4.15.18-13-pve: 4.15.18-37
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: not correctly installed
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-9
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-51
    libpve-guest-common-perl: 2.0-20
    libpve-http-server-perl: 2.0-13
    libpve-storage-perl: 5.0-42
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.1.0-3
    lxcfs: 3.0.3-pve1
    novnc-pve: 1.0.0-3
    proxmox-widget-toolkit: 1.0-26
    pve-cluster: 5.0-37
    pve-container: 2.0-37
    pve-docs: 5.4-2
    pve-edk2-firmware: 1.20190312-1
    pve-firewall: 3.0-20
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-9
    pve-i18n: 1.1-4
    pve-libspice-server1: 0.14.1-2
    pve-qemu-kvm: 3.0.1-2
    pve-xtermjs: 3.12.0-1
    qemu-server: 5.0-51
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3

    00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
    Subsystem: Fujitsu Technology Solutions Ethernet Connection (2) I219-LM
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 142
    Region 0: Memory at ef200000 (32-bit, non-prefetchable) [size=128K]
    Capabilities: [c8] Power Management version 3
    Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
    Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
    Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Address: 00000000fee00498 Data: 0000
    Capabilities: [e0] PCI Advanced Features
    AFCap: TP+ FLR+
    AFCtrl: FLR-
    AFStatus: TP-
    Kernel driver in use: e1000e
    Kernel modules: e1000e

    May 22 06:34:15 scrypt kernel: [101650.550599] e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
    May 22 06:34:15 scrypt kernel: [101650.550599] TDH <6d>
    May 22 06:34:15 scrypt kernel: [101650.550599] TDT <80>
    May 22 06:34:15 scrypt kernel: [101650.550599] next_to_use <80>
    May 22 06:34:15 scrypt kernel: [101650.550599] next_to_clean <6c>
    May 22 06:34:15 scrypt kernel: [101650.550599] buffer_info[next_to_clean]:
    May 22 06:34:15 scrypt kernel: [101650.550599] time_stamp <101829e46>
    May 22 06:34:15 scrypt kernel: [101650.550599] next_to_watch <6d>
    May 22 06:34:15 scrypt kernel: [101650.550599] jiffies <101829f60>
    May 22 06:34:15 scrypt kernel: [101650.550599] next_to_watch.status <0>
    May 22 06:34:15 scrypt kernel: [101650.550599] MAC Status <80083>
    May 22 06:34:15 scrypt kernel: [101650.550599] PHY Status <796d>
    May 22 06:34:15 scrypt kernel: [101650.550599] PHY 1000BASE-T Status <7800>
    May 22 06:34:15 scrypt kernel: [101650.550599] PHY Extended Status <3000>
    May 22 06:34:15 scrypt kernel: [101650.550599] PCI Status <10>
    May 22 06:34:17 scrypt kernel: [101652.566531] e1000e 0000:00:1f.6 enp0s31f6: Detected Hardware Unit Hang:
    May 22 06:34:17 scrypt kernel: [101652.566531] TDH <6d>
    May 22 06:34:17 scrypt kernel: [101652.566531] TDT <80>
    May 22 06:34:17 scrypt kernel: [101652.566531] next_to_use <80>
    May 22 06:34:17 scrypt kernel: [101652.566531] next_to_clean <6c>
    May 22 06:34:17 scrypt kernel: [101652.566531] buffer_info[next_to_clean]:
    May 22 06:34:17 scrypt kernel: [101652.566531] time_stamp <101829e46>
    May 22 06:34:17 scrypt kernel: [101652.566531] next_to_watch <6d>
    May 22 06:34:17 scrypt kernel: [101652.566531] jiffies <10182a158>
    May 22 06:34:17 scrypt kernel: [101652.566531] next_to_watch.status <0>
    May 22 06:34:17 scrypt kernel: [101652.566531] MAC Status <80083>
    May 22 06:34:17 scrypt kernel: [101652.566531] PHY Status <796d>
    May 22 06:34:17 scrypt kernel: [101652.566531] PHY 1000BASE-T Status <7800>
    May 22 06:34:17 scrypt kernel: [101652.566531] PHY Extended Status <3000>
    May 22 06:34:17 scrypt kernel: [101652.566531] PCI Status <10>

Looks like the old problem has returned.
    The server tends to last about 48 hrs before crashing.

    Settings for enp0s31f6:
    Supported ports: [ TP ]
    Supported link modes: 10baseT/Half 10baseT/Full
    100baseT/Half 100baseT/Full
    1000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: Yes
    Advertised link modes: 10baseT/Half 10baseT/Full
    100baseT/Half 100baseT/Full
    1000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    MDI-X: on (auto)
    Supports Wake-on: pumbg
    Wake-on: g
    Current message level: 0x00000007 (7)
    drv probe link
    Link detected: yes

    ethtool -K enp0s31f6 sg off tso off gro off
    Cannot get device udp-fragmentation-offload settings: Operation not supported
    Cannot get device udp-fragmentation-offload settings: Operation not supported
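    If disabling the offloads does help, a common way to make the `ethtool -K` workaround survive reboots is a `post-up` hook in `/etc/network/interfaces` (a sketch, assuming the hang is offload-related; the interface name is taken from the output above and a manual/bridge-port setup is assumed):

    ```
# /etc/network/interfaces (fragment) - re-apply the offload workaround on boot
iface enp0s31f6 inet manual
    post-up /sbin/ethtool -K enp0s31f6 sg off tso off gso off gro off
    ```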
     