Hi people,
I will be very grateful to anyone who can help me.
I think I lose the PVE cluster while vzdump is in progress ("all VMs run in local mode"). I have configured vzdump snapshot backups of all VMs, on all PVE hosts, to an NFS shared server starting at 00:00 hrs, since nobody is working at that time.
I use only KVM VMs, and my PVE 2.3 nodes have the cfq I/O scheduler configured.
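For what it's worth, this is how I verify the active scheduler on each node (sda is just an example device name):

    # the scheduler in use is shown in brackets
    cat /sys/block/sda/queue/scheduler
    # e.g.: noop deadline [cfq]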
Please see the graphs, and note that I see almost the same error (a cut in the sequence of the graphs, for all resources) on all PVE 2.3 nodes, including a PVE host that currently isn't running any VMs but is still part of the PVE cluster.
Here is a graph from the host:
Here is a graph from a VM:
This is my scenario:
For the backup of VMs:
1 PC with:
Hardware: 1 PC workstation with Intel Core 2 Duo / SATA II / 2 NICs at 1 Gb/s
Software: CentOS 6.3 x64 + NFS share exported to all PVE hosts + bonding balance-xor (export sketch below)
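A minimal sketch of the export I have in mind on the CentOS box; the path and the subnet are examples, not my real values:

    # /etc/exports on the backup server (path and network are assumptions)
    /srv/vzdump  192.168.0.0/24(rw,sync,no_subtree_check)
    # reload the export table without restarting NFS
    exportfs -ra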
For the PVE nodes (all hosts have 1 Gb/s NICs):
1 PC with:
Hardware: Dell server + 2 SAS disks in RAID5
Software: PVE 1.8 + bonding balance-alb + KVM virtual disks in qcow2 format
Note: this machine has never shown problems.
3 PCs with:
Hardware: Dell servers + 1 SAS disk in RAID5
Software: PVE 2.3 + bonding active-backup; on 1 PVE host the KVM virtual disks are in qcow2 format, and on the other PVE hosts they are in raw format.
2 PCs with:
Hardware: Workstation + SATA III + 2 NICs for the PVE cluster + 2 NICs for DRBD
Software: PVE 2.3 + bonding active-backup for the PVE cluster + mdadm RAID1 + DRBD 4.2 (DRBD is outside of the RAID) + all KVM virtual disks in raw format on DRBD (LVM on top of DRBD).
I believe I must adjust the vzdump.conf file, specifically the ionice and/or bwlimit options on each PVE host; I sketch below what I have in mind.
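A minimal sketch of what I am thinking of for /etc/vzdump.conf on each node; the numbers are guesses on my part and would need tuning:

    # /etc/vzdump.conf (per PVE host) - example values, not tested
    # cap backup bandwidth so the NFS link and local disks are not saturated (KB/s)
    bwlimit: 40000
    # give vzdump a low cfq best-effort priority (0 = highest, 7 = lowest)
    ionice: 7

With cfq as the scheduler, a high ionice number should keep the backup reads from starving the running VMs, and bwlimit should protect the 1 Gb/s links.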
Any suggestion is welcome, and if possible with the reasoning behind it.
Best regards to all the community,
Cesar