Search results

  1. Bandwidth limit a move disk procedure?

    Is it possible to bandwidth limit moving a disk from one storage to another? My source storage is SATA, and every time I move a disk it causes serious I/O problems for the other VMs hosted there.
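One possible answer, as a hedged sketch: newer Proxmox VE releases (not the 3.x series discussed in this thread) accept a `--bwlimit` option on `qm move_disk`, and a cluster-wide default can be set in `/etc/pve/datacenter.cfg`. The VMID, disk name, and storage name below are assumptions for illustration:

```shell
# Hypothetical VMID (100), disk (virtio0) and target storage names.
# --bwlimit is in KiB/s and exists only in newer Proxmox VE releases.
qm move_disk 100 virtio0 target-storage --bwlimit 51200   # cap at ~50 MB/s

# Alternatively, a cluster-wide default in /etc/pve/datacenter.cfg:
# bwlimit: move=51200
```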
  2. Adding - Deleting HA kvm fails half nodes

    Yes, I always do when I make changes by hand in cluster.conf that can't be done with the GUI. Mostly I use the Proxmox GUI to add/remove HA VMs, which does it automatically.
  3. Adding - Deleting HA kvm fails half nodes

    Hello, I have a working cluster of 7 nodes. I have enabled HA for around 20 KVMs. Everything worked well until recently, when I tried to add HA to a KVM and noticed strange logs. All nodes report pveversion proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve) pve-manager: 3.3-1 (running...
  4. [SOLVED] Disk move problem

    I upgraded the package, stopped and started the VM, and then moved the disk without ticking "delete source". It worked. I then removed the old disk manually with rm. Thank you.
  5. [SOLVED] Disk move problem

    # info version 2.1.0 root@node4:~# pveversion -v proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve) pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73) pve-kernel-2.6.32-32-pve: 2.6.32-136 pve-kernel-2.6.32-29-pve: 2.6.32-126 pve-kernel-2.6.32-31-pve: 2.6.32-132 lvm2: 2.02.98-pve4...
  6. [SOLVED] Disk move problem

    I have started lately to move a lot of images from NAS to Ceph pools. I have a problem with one image that failed to move. Move disk log: create full clone of drive virtio0 (NAS_3:600/vm-600-disk-1.qcow2) 2014-10-22 19:15:41.421050 7fc674087760 -1 did not load config file, using default settings...
  7. Windows KVM frequent restarts

    Investigating the dump files showed me memory-related problems. All the dumps from the restarts had bug check string PAGE_FAULT_IN_NONPAGED_AREA, caused by the win32k.sys driver. kvm.conf has fixed memory at 16384MB. Type 'help' for help. # info balloon balloon: actual=16384 max_mem=16384 I had...
  8. Migrate KVMs to ceph cluster

    I can't believe I missed that. Thanks.
  9. Migrate KVMs to ceph cluster

    I am testing my new Ceph cluster and trying to find the best way to migrate machines into it. The simplest is to back up the KVM and restore it to the Ceph pool. Another way is to convert the images, output directly to the Ceph pool, and then edit the conf. qemu-img convert -p -O rbd...
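The convert-directly-to-rbd approach mentioned in the snippet can be sketched as follows. Note that an RBD image holds raw data, so the output format is `raw` with an `rbd:` target; the pool name, image path, and VMID here are assumptions, not values from the thread:

```shell
# Convert a qcow2 image straight into a Ceph pool (hypothetical names).
# -p shows progress; RBD stores raw data, hence -O raw with an rbd: target.
qemu-img convert -p -O raw \
    /mnt/pve/NAS_3/images/600/vm-600-disk-1.qcow2 \
    rbd:my-ceph-pool/vm-600-disk-1

# Afterwards, point the VM config at the new storage, e.g. in
# /etc/pve/qemu-server/600.conf:  virtio0: my-ceph-pool:vm-600-disk-1
```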
  10. Proxmox VE Ceph Server released (beta)

    Thank you for the answers. This is the question I am mostly seeking an answer to: in a scenario where 1 node dies completely in a 3-node cluster with 3-times replication, and the Ceph pools were almost full before the disaster, will the remaining OSDs have space to rebalance? I believe not...
  11. Proxmox VE Ceph Server released (beta)

    I am planning a 3-node Ceph-Proxmox cluster and trying to find what is better for my setup regarding replication count. I am not sure about one thing. Each node will have 4 OSDs; 4 x 4 TB SATA disks per node means a 48 TB Ceph cluster in total. If I use 3-times replication, the usable space will...
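The capacity arithmetic behind both of these Ceph questions can be worked through directly, assuming the 3 nodes x 4 OSDs x 4 TB layout from the post and ignoring filesystem and near-full overheads:

```shell
# Raw capacity: 3 nodes x 4 OSDs x 4 TB each (numbers from the post)
raw=$((3 * 4 * 4))          # 48 TB raw
replicas=3
usable=$((raw / replicas))  # 16 TB usable with size=3
echo "raw=${raw}TB usable=${usable}TB"

# If one full node dies, only 2/3 of the raw capacity remains (32 TB),
# so a pool that was nearly full at size=3 cannot re-replicate a third
# copy across only 2 nodes when CRUSH places replicas per-node.
```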
  12. Windows KVM frequent restarts

    A KVM with Windows 2008 R2 has frequent unwanted reboots. I noticed that the reboots happen when CPU load starts to grow. bootdisk: ide0 cores: 4 cpu: qemu64 ide2: none,media=cdrom memory: 16384 net0: e1000=8E:52:D9:AD:33:E5,bridge=vmbr0,rate=125 onboot: 1 ostype: win7 sockets: 1 virtio0...
  13. A lot of HA problems

    In the end I stopped the problematic KVMs, restarted rgmanager, and everything went well. It was the last option I had, and that is what I did. All is perfect again, no problems at all. I am now testing the different network configurations you propose. If I delete auto vmbr0:1 and change vmbr0 as you...
  14. A lot of HA problems

    These are quite general questions that don't seem relevant to me. The hosts files are all the same, so I can join nodes without needing to reboot each time; this is a sample: 10.0.0.1 node1 through 10.0.0.33 node33, the same on all nodes. # Do not remove the following line, or various programs #...
  15. A lot of HA problems

    At least, can anyone help with why some nodes get the updates and some don't?
  16. A lot of HA problems

    I have passwords and server info there that I don't want to reveal, so I should say I skipped those lines. Fencing works like a charm when it is activated. Let me also add some logs for the no. 5 issue. When I add or delete an HA service and activate the new conf, node2 and node4 Aug 1 13:15:47 node2...
  17. A lot of HA problems

    After a lot of reboots, changed confs, randomly fenced nodes, and deleting and reinstalling nodes, I have ended up with a 4-node cluster with HA issues. I would love to avoid any further stopping and starting of KVMs. Let me count the problems. 1. clustat on the nodes shows different results. (I removed IDs where...
  18. Migrate cluster from broadcast to multicast

    I changed the conf to multicast and edited my network, firewall, and routing confs for the mcast domain 224.0.0.1/4. Stopped all KVMs. Stopped rgmanager. Removed the nodes from the fence domain. Rebooted each node one by one. Eventually I gained quorum and the auto-boot KVMs started. Although I still have some HA problems, I consider this solved.
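Before (or after) a migration like this, multicast connectivity between the nodes can be verified with `omping`, the tool commonly used for this in corosync/Proxmox clusters. The hostnames below are assumptions; run the same command on every node at the same time:

```shell
# Verify multicast works between cluster nodes (hypothetical hostnames).
# Run simultaneously on each listed node; all nodes should show ~0% loss
# for both unicast and multicast responses.
omping -c 600 -i 1 -q node1 node2 node3 node4
```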
  19. Migrate cluster from broadcast to multicast

    I know, but at the time I created the cluster our datacenter didn't support multicast on our private network. Now it does, so I think it's time to change it back.
  20. Migrate cluster from broadcast to multicast

    I have a working cluster using broadcast for cluster communication. This method, plus the default cluster conf and the size of 7 nodes, probably led to many problems. Nodes get fenced for no particular reason and HA VMs get restarted frequently and randomly; problems like these 2 posts...