Search results

  1. B

    vzdump vz local snapshot question

    I'm trying to better understand vzdump and how it does a snapshot for an OpenVZ container. The VZs are at /var/lib/vz. How does vzdump make a snapshot of a directory like /var/lib/vz/private/107? I'd like to use that same method to make snapshots and then back up directories within the VZ.
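
    Snapshot mode in vzdump does not snapshot the directory itself; it takes an LVM snapshot of the volume that holds /var/lib/vz and reads the container's files from that frozen view. A minimal sketch of the same idea, assuming /var/lib/vz sits on an LV named data in a VG named pve, with placeholder directory names under 107:

    [code]
    # create a temporary LVM snapshot of the volume backing /var/lib/vz (assumes VG "pve", LV "data")
    lvcreate --snapshot --size 1G --name vzsnap /dev/pve/data

    # mount the snapshot read-only and copy directories out of the frozen view
    mkdir -p /mnt/vzsnap
    mount -o ro /dev/pve/vzsnap /mnt/vzsnap
    tar -czf /backup/ct107-dirs.tar.gz -C /mnt/vzsnap/private/107 var/www etc

    # remove the snapshot when done
    umount /mnt/vzsnap
    lvremove -f /dev/pve/vzsnap
    [/code]

    The consistency comes from the LVM snapshot, not from anything directory-level, so this only works if /var/lib/vz really is on LVM.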
  2. B

    Bug in DRBD causes split-brain, already patched by DRBD devs

    module-assistant. OK I have this part done:
    mkdir drbd
    cd drbd
    apt-get install git-core git-buildpackage fakeroot debconf-utils docbook-xml docbook-xsl dpatch xsltproc autoconf flex pve-headers-2.6.32-11-pve module-assistant
    git clone http://git.drbd.org/drbd-8.3.git
    cd drbd-8.3
    git checkout...
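
    For what it's worth, here is how that build usually continues with module-assistant; this is a sketch of the conventional flow, assuming the checked-out tree carries Debian packaging that produces drbd8-utils and drbd8-source packages (those names are assumptions, not taken from this thread):

    [code]
    # build the Debian packages from the git tree
    cd drbd-8.3
    dpkg-buildpackage -rfakeroot -b -uc
    cd ..

    # install the userland tools and the module source package
    dpkg -i drbd8-utils_*.deb drbd8-source_*.deb

    # let module-assistant build the kernel module against the installed pve headers
    module-assistant prepare
    module-assistant auto-install drbd8
    [/code]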
  3. B

    VNC-Console - keyboard issue

    Thank you, that is easy. Is there a way to use a different port number?
  4. B

    Bug in DRBD causes split-brain, already patched by DRBD devs

    I thought it was just userland. Then when a new pve kernel is installed, we do not need to build a module, correct? [AFAIR rebuilding a module was suggested in one of these drbd threads].
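
    The split matters in practice: the utilities survive a kernel update, but a drbd.ko built for the old kernel does not follow along automatically. A quick way to check what the running kernel actually has, using standard commands (nothing Proxmox-specific assumed):

    [code]
    # which kernel is running, and which drbd module it can see
    uname -r
    modinfo drbd | grep -E '^(filename|version)'

    # if drbd is loaded, the first line of /proc/drbd reports the module version
    head -n 2 /proc/drbd
    [/code]

    If the module ships inside the pve kernel itself there is nothing to rebuild; if modinfo points at a module built for a different kernel, it needs rebuilding.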
  5. B

    Bug in DRBD causes split-brain, already patched by DRBD devs

    e100: I have a question about drbd8-utils. Does drbd8-utils contain a kernel module or just the management programs?
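
    One way to answer that directly on the box is a plain dpkg query; nothing is assumed beyond the package being installed:

    [code]
    # a kernel module would show up as a .ko file in the package contents
    dpkg -L drbd8-utils | grep '\.ko$'

    # the management programs the package does ship
    dpkg -L drbd8-utils | grep -E 'drbdadm|drbdsetup|drbdmeta'
    [/code]

    An empty result from the first grep means the package is userland only and the module has to come from the kernel or a separate module package.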
  6. B

    LVM Snapshot backup problems

    We are using NFS on a Proxmox 2.1 system for backups without problems. The backup file system is ext3. Our largest backups are 20GB. Is anyone having issues with NFS on Proxmox?
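
    For reference, a typical NFS backup storage entry as it might look in /etc/pve/storage.cfg on a 2.x system; the storage name, server address and export path below are placeholders, not the values from this setup:

    [code]
    nfs: backup-nfs
            path /mnt/pve/backup-nfs
            server 192.168.0.50
            export /srv/proxmox-backups
            content backup
            options vers=3
            maxfiles 3
    [/code]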
  7. B

    OpenVZ on DRBD: how to failover?

    Can you post more details on your setup? For instance, the scripts? Also, are you using primary/primary DRBD?
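
    For context, running guests on both nodes at once needs dual-primary DRBD, which is set in the resource definition; a rough sketch with placeholder host names, devices and addresses (allow-two-primaries is the relevant part):

    [code]
    resource r0 {
        protocol C;
        net {
            allow-two-primaries;
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on nodeA { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
        on nodeB { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
    }
    [/code]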
  8. B

    KVM optimization

    With HA working great for KVM on DRBD, I'm tempted to switch a container to KVM. This container is where we do a lot of data entry. But speed tests on our data show the CT is 7 times faster than KVM. The system runs Debian etch. Inside the system, memory and CPU usage are low. In case someone...
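
    When comparing a CT against a KVM guest like this it helps to separate disk from CPU. A crude but repeatable check that can be run identically inside the container and inside the guest (paths and sizes are arbitrary):

    [code]
    # sequential write test; fdatasync makes both environments wait for the data to hit disk
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm /tmp/ddtest

    # crude CPU-only loop for comparison
    time sh -c 'i=0; while [ $i -lt 1000000 ]; do i=$((i+1)); done'
    [/code]

    If the gap shows up mainly on the dd run, virtio and cache settings on the KVM side are the first things to look at; a purely CPU-bound gap points elsewhere.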
  9. B

    kvm migration fails, exit code 250

    Re: ide2: none,media=cdrom. Thanks, added that to the KVM.
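
    For anyone following along, that change can also be made from the CLI instead of editing the conf file by hand; a sketch, assuming VM 101 as in this thread:

    [code]
    # point ide2 at an empty cdrom drive instead of a missing ISO or device
    qm set 101 --ide2 none,media=cdrom

    # confirm the resulting line in the config
    grep ^ide2 /etc/pve/qemu-server/101.conf
    [/code]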
  10. B

    kvm migration fails, exit code 250

    From last night's backup, this is the orig 101.conf:
    fbc241 s012 /bkup/rsnapshot-for-systems/daily.0/fbc241/etc/pve/nodes/fbc241/qemu-server # cat 101.conf
    #will move mail here
    #10.100.1.5 srv5.fantinibakery.com srv5 # mail wheezy kvm 2012-05-02
    # on drbd for high availability
    bootdisk...
  11. B

    kvm migration fails, exit code 250

    OK, to get 101 to migrate I tried: qm unlock 101
    [code]
    Executing HA migrate for VM 101 to node fbc241
    Trying to migrate pvevm:101 to fbc241...Temporary failure; try again
    TASK ERROR: command 'clusvcadm -M pvevm:101 -m fbc241' failed: exit code 250
    [/code]
    Tried: remove from cluster then migrate...
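
    A workaround sometimes suggested when an HA-managed VM refuses a live migrate is to stop managing it for a moment and move it through rgmanager directly; a sketch, not taken from this thread, reusing the service name from the error above:

    [code]
    # see what rgmanager thinks the service state is
    clustat | grep pvevm:101

    # disable the HA service, then re-enable it on the target node
    clusvcadm -d pvevm:101
    clusvcadm -e pvevm:101 -m fbc241
    [/code]

    Note that disabling the service stops the VM, so this is an offline move rather than the live relocate that -M attempts.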
  12. B

    kvm migration fails, exit code 250

    OK, that edit to debug did not work; I did not find the line. So I tried:
    1 - remove 100 from the cluster on the PVE page
    2 - start from the CLI, and got:
    qm start 100
    VM is locked (backup)
    so
    qm unlock 100
    qm start 100
    That started it. I think the issue was caused by rebooting the systems during the...
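
    For reference, the lock that the CLI complained about is just a line in the VM's config file, which is why qm unlock clears it; a quick way to see it, assuming VM 100:

    [code]
    # a stale backup lock shows up as a "lock: backup" line in the config
    grep ^lock /etc/pve/qemu-server/100.conf

    # qm unlock removes that line
    qm unlock 100
    [/code]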
  13. B

    kvm migration fails, exit code 250

    [code]
    fbc241 s012 ~ # clusvcadm -d pvecm:100
    Local machine disabling pvecm:100...Service does not exist
    [/code]
    [code]
    fbc240 s009 /etc/pve/qemu-server # clusvcadm -d pvecm:100
    Local machine disabling pvecm:100...Service does not exist
    [/code]
    Next I'll try from...
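
    Worth noting for anyone hitting the same message: the HA services PVE creates are named pvevm:<vmid>, so the exact name to pass to clusvcadm can be read back from clustat first; a sketch:

    [code]
    # list the HA-managed services and the nodes that own them
    clustat

    # then disable using the exact name shown, e.g.
    clusvcadm -d pvevm:100
    [/code]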
  14. B

    kvm migration fails, exit code 250

    More info:
    fbc241 s012 ~ # fence_tool ls
    fence domain
    member count  3
    victim count  0
    victim now    0
    master nodeid 1
    wait state    none
    members       1 2 4

    fbc241 s012 ~ # clustat
    Cluster Status for fbcluster @ Sat May 5 11:09:25 2012
    Member Status: Quorate
    Member Name...
  15. B

    kvm migration fails, exit code 250

    Just noticed:
    qemu-img: Could not open '/dev/drbd-fbc241/vm-100-disk-1': No such file or directory
    That is from a different KVM; trying to start that results in:
    fbc241 s012 ~ # qm start 100
    Executing HA start for VM 100
    Member fbc241 trying to enable pvevm:100...Aborted; service failed...
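
    A missing /dev/drbd-fbc241/vm-100-disk-1 usually means the LV is not visible on this node, either because the volume group on the DRBD device is not active here or because DRBD is not Primary; a few checks worth running, using standard LVM/DRBD commands and the names from the error:

    [code]
    # is DRBD connected and Primary on this node?
    cat /proc/drbd

    # does LVM see the volume group and the logical volume?
    vgs drbd-fbc241
    lvs drbd-fbc241

    # if the LV exists but the device node is missing, activate it
    lvchange -ay drbd-fbc241/vm-100-disk-1
    [/code]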
  16. B

    kvm migration fails, exit code 250

    And lvs and qm list for the 2 DRBD systems. fbc240, where KVM 101 is running:
    fbc240 s009 /etc/pve/qemu-server # lvs
      LV             VG          Attr     LSize  Pool Origin Data% Move Log Copy% Convert
      vm-1023-disk-1 drbd-fbc240 -wi-ao-- 32.01g...
  17. B

    kvm migration fails, exit code 250

    Here is the conf file:
    # on drbd for high availability
    bootdisk: virtio0
    cores: 4
    cpu: host
    ide2: cdrom,media=cdrom
    memory: 2048
    name: mail-system
    net0: virtio=86:CF:B2:A1:41:7C,bridge=vmbr0
    onboot: 1
    ostype: l26
    sockets: 1
    virtio0: drbd-fbc241:vm-101-disk-1
  18. B

    kvm migration fails, exit code 250

    I get this trying to migrate a KVM:
    Executing HA migrate for VM 101 to node fbc241
    Trying to migrate pvevm:101 to fbc241...Temporary failure; try again
    TASK ERROR: command 'clusvcadm -M pvevm:101 -m fbc241' failed: exit code 250
    However, 4 other KVMs migrate back and forth without an issue...
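
    Since only this one VM refuses to migrate, comparing its HA definition and state with a VM that migrates cleanly is a reasonable first step; a sketch using standard cluster tools (on PVE 2.x the HA entries live in /etc/pve/cluster.conf):

    [code]
    # compare 101's HA entry with one belonging to a VM that migrates fine
    grep pvevm /etc/pve/cluster.conf

    # sanity-check the cluster config and the current service states
    ccs_config_validate
    clustat
    [/code]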
  19. B

    Problem with "Create CT on nfs storage"

    Why use lenny-backports? That line in sources.list should probably be removed. Then run aptitude update and install nfs-kernel-server from squeeze. I am not sure that will solve the issue, but it is better to use squeeze stable packages.
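
    Concretely, that would mean something along these lines (the mirror URL is an example, not taken from the thread):

    [code]
    # in /etc/apt/sources.list drop the lenny-backports line and keep a squeeze line such as:
    # deb http://ftp.debian.org/debian squeeze main contrib

    aptitude update
    aptitude install nfs-kernel-server
    [/code]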
  20. B

    "tar: write error" during a Restore

    The restore ended up working:
    tar: write error
    4122+93066018 records in
    491520+0 records out
    128849018880 bytes (129 GB) copied, 4299.35 s, 30.0 MB/s
    TASK OK