
Proxmox VE 2.0 beta3 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Nov 29, 2011.

  1. martin (Proxmox Staff Member)

    Hi all!

    We just released the third beta of Proxmox VE 2.0 (new ISO and also updated the repository). Just run "aptitude update && aptitude dist-upgrade" to update from a previous beta.
    A big Thank-you to our active community for all feedback, testing, bug reporting and patch submissions.
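
    For those upgrading an existing beta install, the whole procedure boils down to something like this (assuming the beta repository from the previous install is still configured; the reboot is only needed to boot the new kernel):
    Code:
    # refresh the package lists from the already configured beta repository
    aptitude update
    # pull in all updated 2.0 beta3 packages, including the new kernel
    aptitude dist-upgrade
    # reboot so the new pve kernel is actually running
    shutdown -r now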

    New Features

    - do not activate clvmd by default (clvm is now only needed in very special cases)
    - start i18n support
    • HowTo for contributing translations will follow soon
    - fixed a lot of vzdump/snapshot related issues
    - New Kernel
    • Based on vzkernel-2.6.32-042stab044.1.src.rpm
    - Countless small bug fixes

    Overview
    http://pve.proxmox.com/wiki/Roadmap#Roadmap_for_2.x

    Documentation (work in progress)
    http://pve.proxmox.com/wiki/Category:Proxmox_VE_2.0

    Get involved (incl. links to the public git repository and bugzilla bugtracker):
    http://www.proxmox.com/products/proxmox-ve/get-involved


    Proxmox VE 2.0 beta forum
    http://forum.proxmox.com

    Download
    http://www.proxmox.com/downloads/proxmox-ve/17-iso-images

    Happy testing, waiting for feedback!

    __________________
    Best regards,

    Martin Maurer
    Proxmox VE project leader
     
  2. tom (Proxmox Staff Member)

    Here is the current pveversion -v for the beta3:

    Code:
    root@hp1:~# pveversion -v
    pve-manager: 2.0-12 (pve-manager/2.0/784729f4)
    running kernel: 2.6.32-6-pve
    proxmox-ve-2.6.32: 2.0-53
    pve-kernel-2.6.32-6-pve: 2.6.32-53
    lvm2: 2.02.86-1pve2
    clvm: 2.02.86-1pve2
    corosync-pve: 1.4.1-1
    openais-pve: 1.1.4-1
    libqb: 0.6.0-1
    redhat-cluster-pve: 3.1.7-1
    pve-cluster: 1.0-12
    qemu-server: 2.0-9
    pve-firmware: 1.0-13
    libpve-common-perl: 1.0-8
    libpve-access-control: 1.0-2
    libpve-storage-perl: 2.0-8
    vncterm: 1.0-2
    vzctl: 3.0.29-3pve3
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 0.15.0-1
    ksm-control-daemon: 1.1-1
    Code:
    root@hp1:~# uname -a
    Linux hp1 2.6.32-6-pve #1 SMP Tue Nov 29 09:34:20 CET 2011 x86_64 GNU/Linux
    root@hp1:~#
     
  3. tux (Member)

    Really nice, so I can start using it again. clvmd caused so many problems that I had stopped testing Proxmox 2.0. Great work!
     
  4. hadyos (Member)

    Hi,

    1. Testing beta3 I can see that all guest monitor icons in the GUI are black even when the guest is on,
    any idea why?
    2. Logging in from Chrome on Windows 7 gets me "Login failed, please try again", while the same login on Linux with Firefox 8 logs me in without errors.

    Thanks,
    hadyos
     
  5. vadq (New Member)

    After the upgrade to beta3 a VM couldn't start.
    Error in the log:
    Code:
    can't activate LV 'DRBD_D0:vm-102-disk-1': Skipping volume group drbdvg
    Checking the volume groups:
    Code:
    root@proxmox01:/etc/lvm# vgs
      Skipping clustered volume group drbdvg
      VG   #PV #LV #SN Attr   VSize   VFree
      pve    1   3   0 wz--n- 151.00g 1020.00m
     
  6. dietmar (Proxmox Staff Member)

    We do not use clvm by default anymore (you need to activate that manually if you really want to use it).
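
    So for an existing clustered VG like 'drbdvg' there are two options: either start clvmd manually, or clear the clustered flag on the VG. Roughly like this (the init script name and the locking_type override are assumptions, adjust to your setup):
    Code:
    # option 1: keep using cLVM - start the cluster LVM daemon by hand
    /etc/init.d/clvm start
    # option 2: stop treating the VG as clustered (e.g. a single-node DRBD setup)
    # temporarily bypass cluster locking so the clustered flag can be cleared
    vgchange -cn drbdvg --config 'global { locking_type = 0 }'
    vgchange -ay drbdvg    # activate the logical volumes again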
     
  7. tom (Proxmox Staff Member)

  8. udo (Active Member)

    Hi,
    strange network effect with the new kernel.
    Yesterday afternoon I updated one server. On this server runs a debian6 OpenVZ VM for backup (bacula).
    In the evening I saw ping loss to both the host and the VM - host/VM were absolutely idle.
    After a reboot everything looked ok and the backup ran without trouble.
    Now the packet loss has started again:
    Code:
    proxmox3 This host is flapping between states PING WARNING 2011-12-01 11:11:12 0d 0h 0m 57s 4/4 PING WARNING - Packet loss = 70%, RTA = 0.16 ms
    backup-srv PING CRITICAL 2011-12-01 11:10:21 0d 0h 50m 44s 4/4 PING CRITICAL - Packet loss = 80%, RTA = 0.16 ms
    
    The host/VM are absolutely calm:
    Code:
    top - 11:14:01 up 13:51,  1 user,  load average: 0.00, 0.00, 0.00
    Tasks: 223 total,   1 running, 222 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:   8134108k total,  3408204k used,  4725904k free,    61788k buffers
    Swap:  7340024k total,    18280k used,  7321744k free,  2945292k cached
    
        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                              
          1 root      20   0  8356  784  648 S    0  0.0   0:00.85 init                                                                                                 
          2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd                                                                                             
          3 root      RT   0     0    0    0 S    0  0.0   0:00.06 migration/0                                                                                          
          4 root      20   0     0    0    0 S    0  0.0   0:00.13 ksoftirqd/0                                                                                          
          5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0                                                                                          
          6 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0                                                                                           
          7 root      RT   0     0    0    0 S    0  0.0   0:00.10 migration/1                                                                                          
          8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1                                                                                          
          9 root      20   0     0    0    0 S    0  0.0   0:00.80 ksoftirqd/1                                                                                          
         10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1                                                                                           
         11 root      RT   0     0    0    0 S    0  0.0   0:00.10 migration/2                                                                                          
         12 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2                                                                                          
         13 root      20   0     0    0    0 S    0  0.0   0:01.14 ksoftirqd/2                                                                                          
         14 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/
    ...
    Software up-to-date:
    Code:
    pve-manager: 2.0-12 (pve-manager/2.0/784729f4)
    running kernel: 2.6.32-6-pve
    proxmox-ve-2.6.32: 2.0-53
    pve-kernel-2.6.32-6-pve: 2.6.32-53
    lvm2: 2.02.86-1pve2
    clvm: 2.02.86-1pve2
    corosync-pve: 1.4.1-1
    openais-pve: 1.1.4-1
    libqb: 0.6.0-1
    redhat-cluster-pve: 3.1.7-1
    pve-cluster: 1.0-12
    qemu-server: 2.0-10
    pve-firmware: 1.0-13
    libpve-common-perl: 1.0-8
    libpve-access-control: 1.0-2
    libpve-storage-perl: 2.0-8
    vncterm: 1.0-2
    vzctl: 3.0.29-3pve3
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 0.15.0-1
    ksm-control-daemon: 1.1-1
    
    The network driver is:
    Code:
    modinfo sfc
    filename:       /lib/modules/2.6.32-6-pve/kernel/drivers/net/sfc/sfc.ko
    license:        GPL
    description:    Solarflare Communications network driver
    author:         Solarflare Communications and Michael Brown <mbrown@fensystems.co.uk>
    srcversion:     FD7AF0ECCCBF1333E8EC1A9
    alias:          pci:v00001924d00000813sv*sd*bc*sc*i*
    alias:          pci:v00001924d00000803sv*sd*bc*sc*i*
    alias:          pci:v00001924d00000710sv*sd*bc*sc*i*
    alias:          pci:v00001924d00000703sv*sd*bc*sc*i*
    depends:        i2c-core,mdio,i2c-algo-bit
    vermagic:       2.6.32-6-pve SMP mod_unload modversions 
    parm:           rx_alloc_method:Allocation method used for RX buffers (int)
    parm:           rx_refill_threshold:RX descriptor ring fast/slow fill threshold (%) (uint)
    parm:           rx_xoff_thresh_bytes:RX fifo XOFF threshold (int)
    parm:           rx_xon_thresh_bytes:RX fifo XON threshold (int)
    parm:           separate_tx_channels:Use separate channels for TX and RX (uint)
    parm:           rss_cpus:Number of CPUs to use for Receive-Side Scaling (uint)
    parm:           phy_flash_cfg:Set PHYs into reflash mode initially (int)
    parm:           irq_adapt_low_thresh:Threshold score for reducing IRQ moderation (uint)
    parm:           irq_adapt_high_thresh:Threshold score for increasing IRQ moderation (uint)
    parm:           debug:Bitmapped debugging message enable value (uint)
    parm:           interrupt_mode:Interrupt mode (0=>MSIX 1=>MSI 2=>legacy) (uint)
    
    I'll try a reboot with the older kernel (update2) and see if anything changes.

    Udo
     
  9. udo (Active Member)

    Now rebooted with the old kernel (I had to downgrade because both kernels use the same name, so I can't select the old one in grub).
    Since then it looks normal - but I only have approx. 4h of uptime so far.
    The network driver is the same with that kernel.

    Yesterday I made one more test (with the new kernel), because I suspected a power-saving issue (the host is absolutely idle).
    But the CPU runs at normal speed, and the ping loss also happens after pveperf (the network trouble shows up in the DNS times):
    Code:
    pveperf
    CPU BOGOMIPS:      24079.19
    REGEX/SECOND:      1170800
    HD SIZE:           19.69 GB (/dev/mapper/pve-root)
    BUFFERED READS:    305.79 MB/sec
    AVERAGE SEEK TIME: 3.76 ms
    FSYNCS/SECOND:     5351.13
    DNS EXT:           1616.43 ms
    DNS INT:           1001.99 ms (xx.com)
    
    after reboot:
    pveperf
    CPU BOGOMIPS:      24081.00
    REGEX/SECOND:      1267790
    HD SIZE:           19.69 GB (/dev/mapper/pve-root)
    BUFFERED READS:    302.27 MB/sec
    AVERAGE SEEK TIME: 3.81 ms
    FSYNCS/SECOND:     5018.86
    DNS EXT:           73.05 ms
    DNS INT:           1.13 ms (xx.com)
    
    I haven't installed any software for power saving...
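
    For reference, frequency scaling can be checked quickly via sysfs (assuming the cpufreq driver is loaded at all):
    Code:
    # show the active governor and current frequency per core
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
    # if the cpufreq directory does not exist, no frequency scaling is active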

    Udo
     
  10. JonB (Member)

    Tried Proxmox VE beta3 on 2 i7 machines, using 2 DRBD devices and cLVM on top of those. Migrate and online migrate work fine for a Windows Server 2008 R2 x64 guest. To simulate a serious breakdown I powered off one node, the node which was running the guest.

    I have since been unable to move the guest to the node that was still running using the web GUI. Neither start, migrate, stop, nor suspend worked.

    I thought that HA failover was one of the targets on the roadmap for 2.0? Did I miss any option/config in the web GUI, or is it still like this?
     
  11. udo (Active Member)

    Running for some days now without network trouble on the older kernel:
    Code:
    pveversion -v
    pve-manager: 2.0-12 (pve-manager/2.0/784729f4)
    running kernel: 2.6.32-6-pve
    proxmox-ve-2.6.32: 2.0-53
    pve-kernel-2.6.32-6-pve: 2.6.32-52
    lvm2: 2.02.86-1pve2
    clvm: 2.02.86-1pve2
    corosync-pve: 1.4.1-1
    openais-pve: 1.1.4-1
    libqb: 0.6.0-1
    redhat-cluster-pve: 3.1.7-1
    pve-cluster: 1.0-12
    qemu-server: 2.0-10
    pve-firmware: 1.0-13
    libpve-common-perl: 1.0-8
    libpve-access-control: 1.0-2
    libpve-storage-perl: 2.0-8
    vncterm: 1.0-2
    vzctl: 3.0.29-3pve3
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 0.15.0-1
    ksm-control-daemon: 1.1-1
    
    Any idea how to track down this issue?
    Two other test hosts (with other network cards) work without trouble...

    Udo
     
  12. tom (Proxmox Staff Member)

    HA is not yet implemented in the current beta.
     
  13. JonB (Member)

    Is it still on track for 2.0?
     
  14. tom (Proxmox Staff Member)

    yes, see roadmap.
     
  15. udo (Active Member)

    Hi,
    just tested the new kernel:
    Code:
    pveversion -v
    pve-manager: 2.0-14 (pve-manager/2.0/6a150142)
    running kernel: 2.6.32-6-pve
    proxmox-ve-2.6.32: 2.0-54
    pve-kernel-2.6.32-6-pve: 2.6.32-54
    lvm2: 2.02.86-1pve2
    clvm: 2.02.86-1pve2
    corosync-pve: 1.4.1-1
    openais-pve: 1.1.4-1
    libqb: 0.6.0-1
    redhat-cluster-pve: 3.1.7-1
    pve-cluster: 1.0-12
    qemu-server: 2.0-11
    pve-firmware: 1.0-13
    libpve-common-perl: 1.0-10
    libpve-access-control: 1.0-3
    libpve-storage-perl: 2.0-9
    vncterm: 1.0-2
    vzctl: 3.0.29-3pve6
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 1.0-1
    ksm-control-daemon: 1.1-1
    
    But the same strange thing with packet loss during ping:
    on the management system:
    Code:
    ping 172.20.1.13
    PING 172.20.1.13 (172.20.1.13) 56(84) bytes of data.
    64 bytes from 172.20.1.13: icmp_req=7 ttl=63 time=0.088 ms
    64 bytes from 172.20.1.13: icmp_req=9 ttl=63 time=0.091 ms
    64 bytes from 172.20.1.13: icmp_req=14 ttl=63 time=0.086 ms
    
    on the pve host (tcpdump):
    Code:
    13:01:13.557838 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 1, length 64
    13:01:14.557561 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 2, length 64
    13:01:15.557516 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 3, length 64
    13:01:16.557495 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 4, length 64
    13:01:17.557454 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 5, length 64
    13:01:18.557422 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 6, length 64
    13:01:19.557377 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 7, length 64
    13:01:19.557407 IP proxmox3 > 172.20.4.200: ICMP echo reply, id 4603, seq 7, length 64
    13:01:20.557345 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 8, length 64
    13:01:21.557315 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 9, length 64
    13:01:21.557342 IP proxmox3 > 172.20.4.200: ICMP echo reply, id 4603, seq 9, length 64
    13:01:22.557285 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 10, length 64
    13:01:23.557245 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 11, length 64
    13:01:24.557211 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 12, length 64
    13:01:25.557178 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 13, length 64
    13:01:26.557139 IP 172.20.4.200 > proxmox3: ICMP echo request, id 4603, seq 14, length 64
    13:01:26.557164 IP proxmox3 > 172.20.4.200: ICMP echo reply, id 4603, seq 14, length 64
    
    Any idea?

    Udo
     
  16. udi (Member)

    hi,
    it doesn't cause me any trouble, but the backup task suspends one of the VMs when the backup starts and resumes it immediately.
    this doesn't happen to any of the other VMs.
    all are KVM on LVM.
    here's the log:

    Code:
    Dec  9 22:54:09 genya vzdump[2807]: INFO: Finished Backup of VM 101 (00:09:08)
    Dec  9 22:54:10 genya vzdump[2807]: INFO: Starting Backup of VM 102 (qemu)
    Dec  9 22:54:10 genya qm[3589]: <root@pam> update VM 102: -lock backup
    Dec  9 22:54:11 genya qm[3595]: suspend VM 102: UPID:genya:00000E0B:00014649:4EE28383:qmsuspend:102:root@pam:
    Dec  9 22:54:11 genya qm[3594]: <root@pam> starting task UPID:genya:00000E0B:00014649:4EE28383:qmsuspend:102:root@pam:
    Dec  9 22:54:11 genya qm[3594]: <root@pam> end task UPID:genya:00000E0B:00014649:4EE28383:qmsuspend:102:root@pam: OK
    Dec  9 22:54:11 genya qm[3658]: <root@pam> starting task UPID:genya:00000E4B:00014693:4EE28383:qmresume:102:root@pam:
    Dec  9 22:54:11 genya qm[3659]: resume VM 102: UPID:genya:00000E4B:00014693:4EE28383:qmresume:102:root@pam:
    Dec  9 22:54:11 genya qm[3658]: <root@pam> end task UPID:genya:00000E4B:00014693:4EE28383:qmresume:102:root@pam: OK
    Dec  9 22:58:44 genya pvedaemon[2019]: <root@pam> successful auth for user 'root@pam'
    Dec  9 23:01:15 genya pvedaemon[2005]: worker 2019 finished
    Dec  9 23:01:15 genya pvedaemon[2005]: starting 1 worker(s)
    Dec  9 23:01:15 genya pvedaemon[2005]: worker 4145 started
    Dec  9 23:01:24 genya vzdump[2807]: INFO: Finished Backup of VM 102 (00:07:14)
    Dec  9 23:01:25 genya vzdump[2807]: INFO: Starting Backup of VM 108 (qemu)
     
  17. tom (Proxmox Staff Member)

    I assume you have 2 disks - post the full backup log.

    If you have 2 disks, the VM needs to be suspended (for 1 or 2 seconds) to get a consistent backup.
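
    In snapshot mode with more than one disk vzdump roughly does the following (simplified illustration, not the literal internal calls; snapshot and volume names are made up):
    Code:
    qm suspend 102                                              # briefly freeze guest I/O
    lvcreate -s -n vzsnap-disk1 -L 1G /dev/pve/vm-102-disk-1    # snapshot first disk
    lvcreate -s -n vzsnap-disk2 -L 1G /dev/pve/vm-102-disk-2    # snapshot second disk
    qm resume 102                                               # guest continues after 1-2 seconds
    # the backup itself then reads from the consistent snapshots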
     
  18. udi (Member)

    bingo, i have 2 disks there :)

    i didn't know this, thank you.
     
  19. udo (Active Member)

    Hi,
    it looks like the new Solarflare driver fixes this issue. I have compiled the new version https://support.solarflare.com/inde...ormat=raw&id=165&option=com_cognidox&Itemid=2 and for six hours now everything looks fine.
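
    Building the out-of-tree sfc module against the pve kernel boils down to roughly this (illustrative names and paths; the matching kernel headers for 2.6.32-6-pve need to be installed):
    Code:
    # unpack the driver source from Solarflare and build it for the running kernel
    tar xzf sfc-3.1.0.4091.tgz
    cd sfc-3.1.0.4091
    make && make install
    depmod -a
    # reload the driver (or simply reboot)
    rmmod sfc && modprobe sfc
    The new module afterwards: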
    Code:
    modinfo sfc
    filename:       /lib/modules/2.6.32-6-pve/kernel/drivers/net/sfc/sfc.ko
    version:        3.1.0.4091
    license:        GPL
    description:    Solarflare network driver
    author:         Solarflare Communications and Michael Brown <mbrown@fensystems.co.uk>
    srcversion:     3C884D1F996641354EDDB8D
    alias:          pci:v00001924d00000813sv*sd*bc*sc*i*
    ...
    
    Udo
     
  20. udo (Active Member)

    Strange,
    I moved the 10GB NIC from one server to a test system - the issue showed up on this system as well.
    After updating the NIC driver the ping ran without trouble (for one day).
    Then I moved the NIC back to the first server (with the updated modules) and the issue came back after two hours...

    Hmm, very strange.

    Udo
     
