Proxmox VE 3.2 released!

1- In the roadmap of PVE 3.2 I read "Ceph Server (Technology Preview)" and "Open vSwitch support (Technology Preview)", and the SPICE wiki also says "Proxmox VE 3.1 introduced Spice as a technology preview." Don't you think these features should not be used in production environments, i.e. should be left out of this release?
(Although for me they are excellent features once they are ready for production use.)
It is stable, but the Ceph Firefly release (LTS) is coming in about a month, so I think it should be considered production ready at that time.


2- If Open vSwitch is a technology preview in PVE 3.2, are there specific things I should be careful about?
It may be better to use kernel 3.10, but that kernel is not stable yet (we are waiting for the RHEL 7 release).


3- In PVE 3.1 and older versions, bonding with balance-alb does not work correctly, even with an unmanaged switch (I believe this is a kernel bug). Will I have the same problem with Open vSwitch?
balance-alb is more or less a trick, and it is non-standard.
Open vSwitch only supports active-backup, LACP, and balance-slb.
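For illustration, one of the supported modes (active-backup) could be declared in /etc/network/interfaces along these lines. This is only a sketch: the interface names, bridge name, and address are placeholders, and the option names follow the usual Proxmox OVS examples rather than anything stated in this thread.

```
# Hypothetical OVS bond in active-backup mode; eth0/eth1, vmbr0
# and the address are placeholders -- adjust to your hardware.
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1
    # other supported modes: balance-slb, or LACP
    ovs_options bond_mode=active-backup

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports bond0
```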
 
Sorry to insist, but I tried everything; there is no way to open that .vv file with virt-viewer. I tried drag and drop, the command line, debug mode... I know I have the 20-second limitation, but it's frustrating. :)
 

As it works here and in other places, there must be something wrong and/or different on your side. If you cannot figure it out, we also offer direct help via our commercial support packages.
 
I made it work on Firefox 27.0.1.
Chrome 33.0.1750.146 m downloads a .vv file.
IE stands still.

I don't think it works by default on Chrome or IE. I did a fresh OS install just for the test; only Firefox behaves well.

thanks for your patience

regards
Mhamed
 
Thanks for your reply, Tom. I greatly appreciate it. I had all the steps right - just missed a syntax error in my sources.list.
 
Hello, and thanks for the great work on this release! The introduction of Ceph into Proxmox was something I had hoped for for some time, but I did not think it would make it in this fast, and done as nicely as it is.

At the moment I'm trying to test it just to see how it operates, and I only have two servers available for testing. I understand from another post that this should work, but without failover capability.

What I did was install v3.2 on two Dell PE515 hosts, each with four 2 TB disks and a 10 Gbit card. I was able to get PVE installed, created the PVE cluster, and added the second server to the cluster. I then followed the instructions for setting up Ceph:

Code:
  # on each host:
  pveceph install

  # on the first host:
  pveceph init --network 10.10.10.0/24

  # on the first host only, not the second, to keep an odd number of monitors:
  pveceph createmon


I then went into the GUI and created the OSDs on the first host. All is good so far, and I see the OSDs in the Ceph status screen...

I then went to the second host to create the OSDs, but when creating one it timed out with "Time Out (500)". I then see this timeout on both servers.

I also see this in the status below: "TASK ERROR: cluster not ready - no quorum?"

Any idea on what is causing this?

Thanks,

-Glen
 
Here is what I'm seeing on the two nodes:


Code:
  root@pmtest1:~# pvecm status
  Version: 6.2.0
  Config Version: 2
  Cluster Name: pmtest
  Cluster Id: 7006
  Cluster Member: Yes
  Cluster Generation: 72
  Membership state: Cluster-Member
  Nodes: 2
  Expected votes: 2
  Total votes: 2
  Node votes: 1
  Quorum: 2
  Active subsystems: 5
  Flags:
  Ports Bound: 0
  Node name: pmtest1
  Node ID: 1
  Multicast addresses: 239.192.27.121
  Node addresses: 10.2.10.101

Code:
  root@pmtest2:~# pvecm status
  Version: 6.2.0
  Config Version: 2
  Cluster Name: pmtest
  Cluster Id: 7006
  Cluster Member: Yes
  Cluster Generation: 72
  Membership state: Cluster-Member
  Nodes: 2
  Expected votes: 2
  Total votes: 2
  Node votes: 1
  Quorum: 2
  Active subsystems: 5
  Flags:
  Ports Bound: 0
  Node name: pmtest2
  Node ID: 2
  Multicast addresses: 239.192.27.121
  Node addresses: 10.2.10.102
 
balance-alb is more or less a trick, and it is non-standard.
Open vSwitch only supports active-backup, LACP, and balance-slb.

Hi spirit, and thanks for your answer.

I see in the GUI of PVE 3.2 both the Linux stack and the OVS stack, so I would like to clear up these doubts:

1- Are there any future plans to remove the Linux stack? I always use it for balance-rr with DRBD in NIC-to-NIC mode.
2- Does OVS have an equivalent of balance-rr for use with DRBD NIC-to-NIC? And if so, will it perform better?
(I searched the Internet, Google, etc., but without success.)

Best regards
Cesar
 
1- Are there any future plans to remove the Linux stack? I always use it for balance-rr with DRBD in NIC-to-NIC mode.

No.

2- Does OVS have an equivalent of balance-rr for use with DRBD NIC-to-NIC?

AFAIK there is no such mode.

We just added OVS because some people need features like sflow support.
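Since the Linux stack is staying, the classic bonding setup for a back-to-back DRBD link still applies. A sketch of what that looks like in /etc/network/interfaces follows; the interface names and address are placeholders, not values taken from this thread.

```
# Hypothetical Linux balance-rr bond for a direct NIC-to-NIC DRBD link.
# eth2/eth3 and 10.0.1.1 are placeholders -- adjust to your setup.
auto bond1
iface bond1 inet static
    address 10.0.1.1
    netmask 255.255.255.0
    slaves eth2 eth3
    bond_mode balance-rr
    bond_miimon 100
```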
 
It seems you have quorum now - do you still get the same error message? And could you please open a new thread for this topic?
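As an aside, on a two-node test cluster that has lost quorum, the expected vote count can be lowered temporarily so that a single node becomes quorate again. This is only sensible for testing, never for production; a sketch:

```
# Inspect the current membership and vote counts
pvecm status

# For a two-node TEST cluster only: let a single node be quorate
pvecm expected 1
```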

Dietmar,

For some reason, the issue went away. What was happening was that the monitor on the first host crashed when I tried to create the new OSDs on the second host. I think it may have had something to do with trying to create the OSDs on the second host too soon after the first host. Once I let things sit for a couple of minutes, I was able to create the OSDs on the second host without crashing the monitors.

All is working now, and while I'm seeing some poor performance at the moment (100 MB/s writes & 200 MB/s reads), it's clear my current implementation is not optimal. Now that I have a functional implementation, I will start working on understanding performance improvements.

Thanks again for a great product!

-Glen
 
All is working now, and while I'm seeing some poor performance at the moment (100 MB/s writes & 200 MB/s reads), it's clear my current implementation is not optimal. Now that I have a functional implementation, I will start working on understanding performance improvements.

Hello glena, can you post the results of your tests? And, if possible for you, with these options:
1- For the VM: Cache = none
2- For the VM: HDD device = VirtIO
3- Run the tests with bonnie++

And also show the configuration of your RAID (if you have one configured), your network, and the HDDs used for Ceph.

Best regards
Cesar

Re-Edit: Ah, I forgot to say: run these tests once you have finished the performance improvements.
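For reference, a bonnie++ run of the kind requested above might look like the following; the mount point and file size are placeholders (the size should be at least twice the VM's RAM so caching doesn't skew the result):

```
# Hypothetical bonnie++ invocation inside the VM.
# -d: test directory on the VirtIO disk, -s: file size in MB, -u: run as this user.
bonnie++ -d /mnt/test -s 16384 -u root
```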
 
VM crashing during backup

We just released Proxmox VE 3.2, introducing great new features!
To quote from the release notes:
update qemu to 1.7.0 [...] improved live backup
Since updating our testing environment we have experienced strange backup errors. We back up three VMs, which use LVM storage. The backup is done in snapshot mode. Two backups complete without error, but the last VM crashes during the backup. The only difference I can see is that the crashing VM has four virtual hard disks. How can I get my fingers on the cause of that behaviour? Is there any log file I could read?
 
Re: VM crashing during backup

The backup log?
I already did that; unfortunately it says:
Code:
  101: Mar 13 00:42:11 INFO: Starting Backup of VM 101 (qemu)
  101: Mar 13 00:42:11 INFO: status = running
  101: Mar 13 00:42:11 INFO: update VM 101: -lock backup
  101: Mar 13 00:42:12 INFO: exclude disk 'ide1' (backup=no)
  101: Mar 13 00:42:12 INFO: backup mode: snapshot
  101: Mar 13 00:42:12 INFO: ionice priority: 7
  101: Mar 13 00:42:12 INFO: creating archive '/mnt/pve/nas-backup/dump/vzdump-qemu-101-2014_03_13-00_42_11.vma.lzo'
  101: Mar 13 00:42:12 INFO: started backup task 'e3947a88-205a-42c1-9e7c-a352bcef5837'
  101: Mar 13 00:42:15 INFO: status: 0% (806354944/343597383680), sparse 0% (33640448), duration 3, 268/257 MB/s
  101: Mar 13 00:42:26 INFO: status: 1% (3516399616/343597383680), sparse 0% (105631744), duration 14, 246/239 MB/s
  101: Mar 13 00:45:53 INFO: status: 2% (7066157056/343597383680), sparse 0% (204587008), duration 221, 17/16 MB/s
  101: Mar 13 00:46:07 INFO: status: 3% (10373693440/343597383680), sparse 0% (258056192), duration 235, 236/232 MB/s
  101: Mar 13 00:49:00 ERROR: VM 101 not running
  101: Mar 13 00:49:00 INFO: aborting backup job
  101: Mar 13 00:49:00 ERROR: VM 101 not running
  101: Mar 13 00:49:02 ERROR: Backup of VM 101 failed - VM 101 not running
So there's not much to learn from that. I was looking for some hint as to why the VM suddenly stops.
 
Re: VM crashing during backup

So there's not much to learn from that. I was looking for some hint why the VM suddenly stops.

Most likely a qcow2 file with format errors? Try to verify with 'qemu-img check'. And please open a new thread for this topic.
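Such a check might look like the following; the image paths are only examples, and with four virtual disks each image would be checked in turn:

```
# Example only -- substitute the real paths of VM 101's disk images.
qemu-img check /var/lib/vz/images/101/vm-101-disk-1.qcow2
qemu-img check /var/lib/vz/images/101/vm-101-disk-2.qcow2
```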
 
Re: VM crashing during backup

I see that the 3.2 release may have issues with backups and VMs being shut down or crashing (at least on some nodes, judging by recent forum posts)... could this be related to the new qemu 1.7 backup improvements?
- Can you confirm this? Are you investigating, or is it just people doing the wrong things?
- Is there any way to easily get back to 3.1 if someone has the same (or other) issues, for any reason (e.g. repository "targets"), as you did in the past for 2.1 and 2.2? (I've asked this before, but nobody ever answered.)

Thanks,
Marco
 
Re: VM crashing during backup

- can you confirm this, are you investigating or is it just people doing wrong things?

Backup works here. But I suggest that you simply test it yourself, and if you find a bug, report it (note: users also reported bugs for 3.1, 3.0, 2.3, 2.2, ...). We are usually quite fast at fixing bugs as soon as we have a reproducible test case.
 
