Search results

  1. Adding PVE4 node to PVE3.4 cluster

    Is it possible? I tried, but got errors:
    # pvecm add 10.1.0.11
    copy corosync auth key
    rsync: change_dir "/etc/corosync" failed: No such file or directory (2)
    rsync: link_stat "/etc/pve/corosync.conf" failed: No such file or directory (2)
    rsync error: some files/attrs were not transferred (see...
  2. CMD hangs VM

    Hi, I have a consistently reproducible issue: CrystalDiskMark hangs the VM in 100% of cases when the VM is launched on PVE 3.4 with kernel 3.10 and the VM's disk is qcow2. With kernel 2.6, or with any other disk format, there is no such problem. When I stop the hung VM I get the following messages:
    root@hq-vmc1-1:~# qm stop 121
    VM still...
  3. Read from ZFS is very slow

    In fact, there is a third node in this cluster. It has only one 250 GB SATA HDD, and I see a similar result on that node.
    root@hq-vmc1-3:~# pveversion
    pve-manager/3.4-3/2fc72fee (running kernel: 2.6.32-37-pve)
    root@hq-vmc1-3:~# parted /dev/sda print free
    Model: ServeRA System...
  4. Read from ZFS is very slow

    root@hq-vmc1-1:~# cat /etc/modprobe.d/zfs.conf
    # ZFS tuning for a proxmox machine that reserves 16GB for ZFS
    #
    # Don't let ZFS use less than 4GB and more than 16GB
    options zfs zfs_arc_min=4294967296
    options zfs zfs_arc_max=17179869184
    #
    # disabling prefetch is no longer required
    options zfs...
  5. Read from ZFS is very slow

    Hi, I have a two-node cluster with ZFS on each node, and I see very slow ZFS performance. On the first node (pve-manager/3.4-3/2fc72fee, running kernel 2.6.32-37-pve):
    root@hq-vmc1-1:/# dd bs=10M count=200 if=/dev/sda of=/dev/null
    2097152000 bytes (2.1 GB) copied, 3.55839 s, 589 MB/s...
  6. Proxmox 3.0 Cluster Node Web Interface Problems

    Hello, after I added a third node to the cluster I get the same messages in syslog:
    Mar 26 03:24:04 hq-vmc1-3 pveproxy[6332]: problem with client 10.1.0.11; ssl3_read_bytes: ssl handshake failure
    Mar 26 03:24:04 hq-vmc1-3 pveproxy[6332]: Can't call method "timeout_reset" on an undefined...
  7. Very high load of the node

    Hello, yesterday I upgraded from PVE 3.2 to 3.4, and today I have big problems. The load of the node suddenly grew at 12:00 and I can't find the cause. Some servers hang with the following messages:
    # iotop -d 10 -P
    Total DISK READ: 21.17 K/s | Total DISK WRITE: 2.82 M/s
    PID PRIO...
  8. Quick migration using ZFS

    Hi, I tried to implement a quick migration using ZFS snapshot/send/receive. It works, but not as fast as I would like: migrating a server with 1 GB of RAM takes about a minute, and almost all of that time is spent saving, copying, and restoring the VM state. Here is an example of commands...
  9. Backup issues with proxmox 2.3?

    Hi, I have the same problem every day.
    31.12.2013
    VMID NAME STATUS  TIME     SIZE     FILENAME
    100       OK      0:03:40  10.11GB  /pool1/backup/dump/vzdump-qemu-100-2013_12_31-00_55_02.vma.lzo
    102       FAILED  0:00:18  got timeout
    105       FAILED  0:00:08  got timeout
    111       FAILED  0:00:05  got timeout
    112       OK      0:38:43...
  10. qemu-nbd bug

    Hi everyone, I have exactly the same problem.
    root@hq-vmc1-1:/var/lib/storages/csv1# qemu-nbd -c /dev/nbd0 disk.qcow2
    root@hq-vmc1-1:/var/lib/storages/csv1# parted /dev/nbd0
    GNU Parted 2.3
    Using /dev/nbd0
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print free
    Error...
  11. Backup with RAM

    Hi! It is possible to take snapshots that include RAM. But is it possible to create a backup the same way, including the RAM?
  12. KVM performance on different operating systems

    OK, thanks, I'll look for the cause.
  13. KVM performance on different operating systems

    First of all, I've updated all the firmware. What is 'HT'?
  14. KVM performance on different operating systems

    # uname -a
    Linux hq-sr-v1 3.2.0-0.bpo.3-amd64 #1 SMP Thu Aug 23 07:41:30 UTC 2012 x86_64 GNU/Linux
    # cat /proc/meminfo | grep Huge
    AnonHugePages:         0 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    # cat...
  15. KVM performance on different operating systems

    No. As I showed before, there is no problem with storage I/O.
  16. KVM performance on different operating systems

    You are wrong. I've tested kernels 10 and 12, and I got exactly the same result as with kernel 14.
  17. KVM performance on different operating systems

    YES!!! With this kernel I got exactly the same result as in CentOS. The small area inside the graph is Proxmox with kernel 2.6.32-14-pve. Thanks! :D
  18. KVM performance on different operating systems

    I found a free benchmark and ran it in a VM on both OSes. As you can see, only file I/O is better on Proxmox. I've tested Ubuntu 12.04: http://forum.proxmox.com/threads/11248-KVM-performance-on-different-operating-systems?p=61720#post61720 Its performance is closer to CentOS. I know that there is the...
  19. KVM performance on different operating systems

    For the tests I use a VM with only one vCPU. Are there any benchmarks for the various virtualization subsystems that I could run? I've already tested the storage subsystem; performance is the same on both operating systems.
  20. KVM performance on different operating systems

    apt-get upgrade didn't work; I had to run aptitude full-upgrade. Now I have the following:
    # kvm --version
    QEMU emulator version 1.1.1 (qemu-kvm-devel), Copyright (c) 2003-2008 Fabrice Bellard
    # uname -a
    Linux hq-sr-v1 2.6.32-14-pve #1 SMP Tue Aug 21 08:24:37 CEST 2012 x86_64 GNU/Linux
    But I still...
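
The zfs_arc_min/zfs_arc_max values quoted in the zfs.conf snippet (result 4) are raw byte counts. A quick sanity check, assuming the comments mean GiB (powers of two), that the numbers in the file match the stated 4 GB minimum and 16 GB maximum:

```shell
# Recompute the ARC limits from the 4 GB / 16 GB targets stated in the
# zfs.conf comments, using binary (GiB) units.
gib=$((1024 * 1024 * 1024))
arc_min=$((4 * gib))    # 4294967296 -- matches zfs_arc_min in the snippet
arc_max=$((16 * gib))   # 17179869184 -- matches zfs_arc_max in the snippet
echo "zfs_arc_min=$arc_min"
echo "zfs_arc_max=$arc_max"
```

So the values in the file are consistent with the intent described in its comments.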
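
The snapshot/send/receive approach described in the quick-migration thread (result 8) can be sketched roughly as below. This is a dry-run sketch only: the VM ID, dataset name, and destination address are hypothetical placeholders, not values from the thread, and the commands are echoed rather than executed so the sequence can be inspected safely.

```shell
#!/bin/sh
# Dry-run sketch of a ZFS-based quick migration: snapshot the VM's
# dataset, replicate it to the other node, then clean up.
# All names below are hypothetical, not taken from the thread.
VMID=101                          # hypothetical VM ID
SRC="rpool/vm-${VMID}-disk-1"     # hypothetical source dataset
DEST=10.1.0.12                    # hypothetical destination node
SNAP="${SRC}@migrate"

# Echo each command instead of running it.
run() { echo "+ $*"; }

run zfs snapshot "$SNAP"
run "zfs send $SNAP | ssh $DEST zfs receive -F $SRC"
run zfs destroy "$SNAP"
```

In a real migration the VM state (RAM) would also have to be saved, copied, and restored, which is where the thread reports most of the time going.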
