Is it possible?
I tried, but ran into errors:
# pvecm add 10.1.0.11
copy corosync auth key
rsync: change_dir "/etc/corosync" failed: No such file or directory (2)
rsync: link_stat "/etc/pve/corosync.conf" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see...
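Maybe I missed creating the cluster on 10.1.0.11 first and that is why /etc/corosync is missing? That is just a guess on my part. The usual order, as I understand it (the cluster name below is only a placeholder):
On 10.1.0.11:
# pvecm create mycluster
Then on the node being added:
# pvecm add 10.1.0.11
# pvecm status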
Hi,
I have the same issue, and it is reproducible every time.
CrystalDiskMark hangs the VM in 100% of cases when the VM runs on PVE 3.4 with kernel 3.10 and the VM's disk is qcow2.
With kernel 2.6, or with any other disk format, there is no such problem.
When I stop the hung VM I get the following messages:
root@hq-vmc1-1:~# qm stop 121
VM still...
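As a workaround I am considering (not verified yet, and the file names below are just placeholders), converting the affected disk from qcow2 to raw while the VM is stopped, then pointing the VM config at the raw image:
# qemu-img convert -f qcow2 -O raw vm-121-disk-1.qcow2 vm-121-disk-1.raw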
In fact, there is a third node in this cluster.
That node has only one 250 GB SATA HDD, and I see a similar result on it.
root@hq-vmc1-3:~# pveversion
pve-manager/3.4-3/2fc72fee (running kernel: 2.6.32-37-pve)
root@hq-vmc1-3:~# parted /dev/sda print free
Model: ServeRA System...
root@hq-vmc1-1:~# cat /etc/modprobe.d/zfs.conf
# ZFS tuning for a Proxmox machine that reserves 16GB for ZFS
#
# Don't let ZFS use less than 4GB or more than 16GB
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=17179869184
#
# disabling prefetch is no longer required
options zfs...
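To check that these limits are actually applied (just my own check, using the standard ZFS-on-Linux kstat interface):
# grep -E '^(size|c_min|c_max)' /proc/spl/kstat/zfs/arcstats
If ZFS is loaded from the initramfs, the file only takes effect after update-initramfs -u and a reboot.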
Hi,
I have a two-node cluster with ZFS on each node, and I see very poor ZFS performance.
On the first node (pve-manager/3.4-3/2fc72fee (running kernel: 2.6.32-37-pve)):
root@hq-vmc1-1:/# dd bs=10M count=200 if=/dev/sda of=/dev/null
2097152000 bytes (2.1 GB) copied, 3.55839 s, 589 MB/s...
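Reading straight from /dev/sda probably measures the raw disk rather than ZFS itself; a pool-level test might be more telling (the pool path below is only an example, and /dev/zero compresses trivially if compression is enabled):
# dd bs=10M count=200 if=/dev/zero of=/pool1/testfile conv=fdatasync
# zpool iostat -v 1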
Hello,
After I added the third node to the cluster, I see the same messages in syslog.
Mar 26 03:24:04 hq-vmc1-3 pveproxy[6332]: problem with client 10.1.0.11; ssl3_read_bytes: ssl handshake failure
Mar 26 03:24:04 hq-vmc1-3 pveproxy[6332]: Can't call method "timeout_reset" on an undefined...
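I am not sure of the cause, but one thing I plan to try (just a guess, and only if this pvecm version supports updatecerts) is refreshing the cluster certificates and restarting the proxy:
# pvecm updatecerts
# service pveproxy restart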
Hello,
Yesterday I upgraded from PVE 3.2 to 3.4, and today I have big problems. The load on the node suddenly grew at 12:00, and I can't find the cause. Some servers hang with the following messages:
# iotop -d 10 -P
Total DISK READ: 21.17 K/s | Total DISK WRITE: 2.82 M/s
PID PRIO...
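To see which processes are stuck in uninterruptible I/O wait I am also checking this (standard tools, nothing Proxmox-specific):
# ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'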
Hi,
I tried to implement a quick migration using ZFS snapshot/send/receive. It works, but not as fast as I would like: migrating a server with 1 GB of RAM takes about 1 minute, and almost all of that time is spent saving, copying and restoring the VM state.
Here is an example of commands...
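For context, the disk part of such a migration is roughly the following (the dataset, snapshot and host names here are placeholders, not the ones from my actual setup):
# zfs snapshot rpool/vm-101-disk-1@migrate
# zfs send rpool/vm-101-disk-1@migrate | ssh node2 zfs receive rpool/vm-101-disk-1
The slow part in my case is not this transfer but saving and restoring the RAM state.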
Hi,
I have the same problem every day.
31.12.2013
VMID  NAME  STATUS  TIME     SIZE     FILENAME
100         OK      0:03:40  10.11GB  /pool1/backup/dump/vzdump-qemu-100-2013_12_31-00_55_02.vma.lzo
102         FAILED  0:00:18  got timeout
105         FAILED  0:00:08  got timeout
111         FAILED  0:00:05  got timeout
112         OK      0:38:43...
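To narrow it down, I am thinking of running one of the failing guests manually outside the schedule (the options below are just my example, with the dump directory taken from the report):
# vzdump 102 --mode snapshot --compress lzo --dumpdir /pool1/backup/dump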
Hi everyone,
I have exactly the same problem.
root@hq-vmc1-1:/var/lib/storages/csv1# qemu-nbd -c /dev/nbd0 disk.qcow2
root@hq-vmc1-1:/var/lib/storages/csv1# parted /dev/nbd0
GNU Parted 2.3
Using /dev/nbd0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Error...
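In case it helps anyone repeating this: the nbd module has to be loaded first, and the device should be detached afterwards (standard qemu-nbd usage, not specific to this problem):
# modprobe nbd max_part=8
# qemu-nbd -d /dev/nbd0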
I found a free benchmark and ran it in a VM on both OSes.
As you can see, only file I/O is better on Proxmox.
I've tested Ubuntu 12.04: http://forum.proxmox.com/threads/11248-KVM-performance-on-different-operating-systems?p=61720#post61720
Its performance is closer to CentOS.
I know, that there is the...
For the tests I use a VM with only one vCPU.
Are there any benchmarks for the various virtualization subsystems that I could run?
I've already tested the storage subsystem; performance is the same on both operating systems.
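For the CPU side, one option I am considering (not something suggested in this thread) is sysbench's CPU test, which is easy to run identically on both systems:
# sysbench --test=cpu --cpu-max-prime=20000 --num-threads=1 run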
apt-get upgrade didn't work. I had to run aptitude full-upgrade.
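As far as I know, Proxmox recommends apt-get dist-upgrade rather than plain apt-get upgrade for these upgrades; aptitude full-upgrade has much the same effect:
# apt-get update && apt-get dist-upgrade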
Now I have the following:
# kvm --version
QEMU emulator version 1.1.1 (qemu-kvm-devel), Copyright (c) 2003-2008 Fabrice Bellard
# uname -a
Linux hq-sr-v1 2.6.32-14-pve #1 SMP Tue Aug 21 08:24:37 CEST 2012 x86_64 GNU/Linux
But I still...