Search results

  1. ASP+MSSQL is 2x slower than in other environments.

    CrystalMark benchmark I ran 3 weeks ago on a VM. I also tried to put the DB files on a USB flash disk to simulate slower disk performance on a notebook, but it was still 2x faster than on the Proxmox VM. It doesn't look like a slow storage problem...
  2. ASP+MSSQL is 2x slower than in other environments.

    No running VMs. DRBD is running on a separate disk of the same type as the system disk. pveperf CPU BOGOMIPS: 119994.48...
  3. ASP+MSSQL is 2x slower than in other environments.

    Hi, I have an ASP.NET + MSSQL 2012 application on a Windows 2012 R2 VM (LVM storage, virtio drivers). The problem is that this application is about 2x slower than on an ESXi VM, on a real HW server (similar config), or even on a working notebook. I've tried to use...
  4. Openvswitch restart after an update breaks DRBD (split-brain)

    Hi, I have 3.3 with DRBD+LVM storage on an OVS network. There was an update for openvswitch which restarted the service and disconnected the network for a few seconds, but it was enough for a split-brain. Is it possible to disable the restart of openvswitch after an update? I guess it is defined in...
  5. [SOLVED] GlusterFS for backups doesn't work.

    It must be a bug in VZDump.pm; glusterfs is missing here: die "can't use storage type '$type' for backup\n" if (!($type eq 'dir' || $type eq 'nfs'));
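    The one-line fix implied by this snippet can be sketched as follows. This is only a sketch of the whitelist check quoted above: the original die() condition comes from the post, while the added 'glusterfs' branch and the surrounding VZDump.pm context are assumptions.

    ```perl
    # Sketch: extend the storage-type whitelist quoted above so that
    # 'glusterfs' is also accepted as a backup target. Only the original
    # die() condition is from the post; the extra branch is assumed.
    die "can't use storage type '$type' for backup\n"
        if (!($type eq 'dir' || $type eq 'nfs' || $type eq 'glusterfs'));
    ```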
  6. [SOLVED] GlusterFS for backups doesn't work.

    Hi, I'm trying to make a CT backup to a glusterfs storage (Proxmox 3.1) and it fails with this error: can't use storage type 'glusterfs' for backup (500) Why is that? The storage is normally accessible in /mnt/pve and Proxmox created the dump and template directories there.
  7. vzdump creates tar archive with zero-filled files on an NFS storage.

    Hi. vzdump creates a tar archive with zero-filled files on an NFS storage. It was working until a couple of weeks ago, but after some time it started to behave like this. Backup to a local storage is OK. The only thing I remember changing is the pve-kernel. NFS export /srv/data/backup/proxmox...
  8. Processes die in CTs with Holy Crap error in dmesg

    I do a suspend backup every second day, but it dies e.g. 12 hours after it. And I didn't have this problem when I used raw OpenVZ on CentOS 6. Most processes respawn, but mono dies completely. Weird.
  9. Processes die in CTs with Holy Crap error in dmesg

    Hi, from time to time some processes die in different OpenVZ CTs and I see these errors in dmesg: Holy Crap 1 0 127951,14376(zabbix_agentd) Holy Crap 1 0 147864,13443(zabbix_agentd) Holy Crap 1 0 122716,12179(spectrum) Holy Crap 1 0 109199,4327(zabbix_agentd) Holy Crap 1 0 7867,594(apache2) Holy...
  10. Can't migrate stopped HA CT

    I use an external RAID connected to both hosts by FC. Then I have cLVM on it and GFS2 mounted on both hosts. DRBD would unnecessarily double disk space usage. Everything works stably except live migration.
  11. Can't migrate stopped HA CT

    I do want a running CT with HA, but not during a migration. The reason is quite simple: I use OpenVZ over GFS2. I know it is not supported by Proxmox, but it works flawlessly until live migration is used (because of a bug in OpenVZ, it fails to make a checkpoint on GFS2). The same problem occurs...
  12. Can't migrate stopped HA CT

    Thank you for your reply. The -d parameter is good for resetting the "failed" state, but the mentioned error doesn't break the service. The -e parameter will start the service and it will start the CT. But I need to migrate it when it is stopped (not running). When I just shut it down, HA will start it...
  13. Can't migrate stopped HA CT

    Hi. When I try to migrate a stopped CT from the web GUI I get this error: Executing HA migrate for CT 1000 to node prox1 Trying to migrate pvevm:1000 to prox1...Temporary failure; try again TASK ERROR: command 'clusvcadm -M pvevm:1000 -m prox1' failed: exit code 250 When I try to do it manually with...
  14. Failover domain failback: relocate instead of live migration

    Is it possible to use relocation instead of live migration for failback in a failover domain? Just stop it and start it on the failback node?
  15. Failover domain failback doesn't work.

    This is it. After removing the first heuristic rule, it tries to migrate it back. Thank you.
  16. Failover domain failback doesn't work.

    <?xml version="1.0"?> <cluster config_version="12" name="stcluster"> <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey" /> <quorumd allow_kill="0" interval="3" label="proxmox_qdisk" tko="10"> <heuristic interval="3" program="ping $GATEWAY -c1 -w1" score="1"...
  17. Failover domain failback doesn't work.

    Hi. I've set up a failover domain for VMs. When node1 crashes, VMs are relocated to node2. It works fine (it had worked before I created the failover domain). But when node1 recovers, VMs are not relocated back to it. <rm> <pvevm autostart="1" vmid="1000" domain="vmdomain"/> <pvevm...
  18. HA resource agent: Unable to open /dev/vzctl

    https://bugzilla.proxmox.com/show_bug.cgi?id=354