Question on cluster

johjoh
Feb 4, 2011
Hi, I have some doubts before creating a cluster. This is my situation:
- Server1: eth0 and eth1 in bond0 (802.3ad) - cluster master - no virtual machines
- Server2: eth0 and eth1 in bond0 (802.3ad) - not yet in the cluster - VM101, VM102, VM103
- Server3: eth0 and eth1 in bond0 (802.3ad) - not yet in the cluster - VM104, VM105, VM106, VM107, VM108

The VMs on Server2 and Server3 were created with consecutive IDs. Can I add these two nodes without losing the existing VMs, or do I have to make a backup first?

Thank you!
 
Hi,
it should work: each host keeps its own network config. The storage config from the master will be synced (if you only use local storage you won't have trouble). CD-ROM images are also synced from the master - if your guests use CD-ROM images (like a live CD), you must first copy them to the master.
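Copying an ISO over to the master could look like this (the path is the default ISO store in Proxmox VE 1.x; the ISO file name and the hostname "master" are placeholders):

```shell
# Copy a guest ISO from a node to the master's ISO directory
# (/var/lib/vz/template/iso is the default ISO store in Proxmox VE 1.x;
#  "my-live-cd.iso" and "master" are placeholders)
scp /var/lib/vz/template/iso/my-live-cd.iso root@master:/var/lib/vz/template/iso/
```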

Udo
 
OK, cluster created. Currently all VMs are on NODE1 (the master), and I have a problem.
I shut down one VM and migrated it correctly to NODE2, but when I started it there it ran very, very slowly (Windows 2003), so I turned it off and migrated it back to the master; now it is fast again.

What's the problem? Network (Bond) or RAID?

Thank you
 
Hi,
there are a lot of possibilities...
Check the I/O performance on the node (pveperf gives a fast overview). If the VMs are very slow, it can be something with the hardware virtualisation on the node. Is it correctly enabled in the BIOS?
Are other KVM VMs fast?
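Two quick checks along these lines (pveperf ships with Proxmox VE; the grep simply counts the CPU's virtualisation flags):

```shell
# Count Intel VT-x / AMD-V CPU flags; 0 means the CPU lacks or hides
# hardware virtualisation (check the BIOS setting in that case)
grep -cE 'vmx|svm' /proc/cpuinfo

# Quick I/O and CPU benchmark of the node (Proxmox VE's own tool);
# compare the FSYNCS/SECOND value between the master and the node
pveperf /var/lib/vz
```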


Udo
 
Thank you Udo, sorry for my English. So, hardware:
- BIOS virtualization features are enabled
- Master is a Fujitsu Primergy RX300 S5, SAS LSI 6G RAID controller with battery, 2 gigabit Ethernet NICs
- Node1 is a Fujitsu Primergy RX200 S5, SAS LSI 5/6 RAID controller with battery, 2 gigabit Ethernet NICs

Software:
- Proxmox 1.7, updated, on both servers
- bond on both servers with the 802.3ad protocol
- if the VMs run on the master, they are fast
- if the VMs run on Node1, they are very slow
- no migration errors from master to Node1
- no migration errors from Node1 to master
- migration is slow, 45-50 MBps; with FastSCP I have copied at 136 MBps in the past

Considerations:
- I moved the VMs to the master before putting Node1 in the cluster
- before I put Node1 in the cluster, the VMs were running on it and were fast
- I don't think it's a hardware problem, because in the past the same hardware ran Proxmox standalone and was fast!
- it seems that Node1 puts the disk in read-only mode inside the VM, and Windows 2003 goes very, very slow

Any help much appreciated!
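Not something suggested in the thread, but since migration throughput (45-50 MBps) is well below the earlier FastSCP figure (136 MBps), a raw network test between the two hosts would isolate the bond from the disks; a common tool for that is iperf (must be installed on both hosts; "master" is a placeholder hostname):

```shell
# On the master: start an iperf server (generic network throughput
# check, not specific to Proxmox)
iperf -s

# On Node1: measure throughput to the master for 10 seconds
iperf -c master -t 10
```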
 
Hi,
you can check with iostat (apt-get install sysstat) whether other processes cause heavy I/O on the disk (e.g. iostat -dm 5 sda).
Have you tried unplugging one cable from the bond?
If you start the VM from the command line, are there any messages (e.g. qm start 101)?
What kind of disk image do you use, raw or qcow2? With qcow2 I once had a curious case of slowness - after converting to raw, everything ran fine.
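A conversion along these lines (the image path and VM ID are placeholders for the real files under /var/lib/vz/images/; stop the VM first, and remember to point the VM config at the new image afterwards):

```shell
# Convert a qcow2 disk image to raw with qemu-img
# (vm-101-disk-1 is a placeholder name; run this with the VM stopped)
qemu-img convert -f qcow2 -O raw \
    /var/lib/vz/images/101/vm-101-disk-1.qcow2 \
    /var/lib/vz/images/101/vm-101-disk-1.raw
```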

Udo
 
Tonight I will try. I use qcow2, so maybe that is the problem.
Out of personal interest: why does qcow2 create problems while raw does not?

Thank you
See you tomorrow for the results of the tests
 
I've migrated 1 of the 3 VMs to Node1:
- conversion from qcow2 to raw: same speed
- disks added as VIRTIO instead of IDE: same speed
- added ",cache=none" to the disks: same speed
- result of the test:
Code:
Linux 2.6.32-4-pve (emu02)      03/01/11        _x86_64_

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              37.01         0.12         7.01        712      40270

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.60         0.00         0.01          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.60         0.00         0.01          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               1.80         0.00         0.01          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               6.00         0.04         0.03          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              21.40         0.10         0.04          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              10.20         0.02         0.07          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             100.00         1.07         0.22          5          1

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             138.52         1.96         0.27          9          1

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              74.20         0.80         0.20          4          1

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              57.60         0.64         0.13          3          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              21.20         0.13         0.10          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              23.20         0.03         0.15          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               7.00         0.08         0.02          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              18.40         0.04         0.09          0          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              87.00         1.24         0.01          6          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             139.00         1.73         0.09          8          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             155.20         0.81         0.02          4          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               3.00         0.00         0.03          0          0

I'm thinking of dismantling the cluster... :-(
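For reference, the ",cache=none" attempt above corresponds to a disk line like this in the VM's config file (/etc/qemu-server/101.conf in Proxmox VE 1.x; the VM ID and volume name are placeholders):

```
virtio0: local:101/vm-101-disk-1.raw,cache=none
```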