Search results

  1. Support for Windows failover clustering

    Thank you for your support. We don't use Windows Failover Clustering directly, but we need to set up a cluster for one of our customers. I will take a look at Blockbridge, but I don't know if it is possible to add servers, and the license costs :( We only need less than 8 TB of usable space, but with the...
  2. Support for Windows failover clustering

    Hello. Have you found a solution for that? PS: I need to do the same, but with Ceph and without an FC SAN.
  3. Problem with SDN QinQ and Emulex ethernet controllers

    Hello. I have a cluster of Dell R720/R730 servers with 4x10Gbit Ethernet (a single LACP bond with VLANs for management and Ceph) and SDN with 802.1q and VLAN zones, connected to Cisco Nexus switches. With SDN QinQ, the network is not working (I have a monitor port, and no ARP traffic is received). With...
  4. PVE 8 continuously /var/log/ifupdown2/network_config* folders

    Hello. I have a strange problem with my cluster: 6 x Dell R730 nodes with 4x10Gbit Ethernet and 2 x Cisco Nexus 3000 series switches. Networking is a single 4-port LACP bond with 2 VLANs (for Ceph and management), and SDN with VLAN and QinQ zones. Only on node 4, 802.1q guests can't communicate...
  5. PVE 2.3test - Unresolved issues

    I think I have found the problem. When the startup of the VM fails, I see this command line in /var/log/daemon.log: -drive 'file=/mnt/mfsclusterISO/template/iso/Windows7.iso,if=none,id=drive-ide0,media=cdrom,aio=native,cache=none' So I tried to start up a VM with the disk image in the...
  6. PVE 2.3test - Unresolved issues

    MooseFS has a FUSE client, so I can have a local filesystem like NFS. It worked fine until this last upgrade.
  7. PVE 2.3test - Unresolved issues

    I see... But now I can't migrate VMs anymore... What will you do now in Proxmox?
  8. PVE 2.3test - Unresolved issues

    Hello there. I still have some problems with the latest pvetest repository: I am using a locally mounted shared directory as the ISO repository; if I attach an ISO file to a VM, it doesn't start because it can't access the file. No problem if I use local storage. I also can't live migrate VMs...
  9. QEMU 1.4, Ceph RBD support (pvetest)

    I have an update: the first error (the missing function in Storage.pm) was my mistake. I upgraded with "apt-get upgrade" instead of "apt-get dist-upgrade", so some libraries were not upgraded. Now I am upgrading all the nodes and will try to resolve the live-migration issue. Sorry...
  10. QEMU 1.4, Ceph RBD support (pvetest)

    There is another problem: live migration from an upgraded server to another one (not yet upgraded, still on the previous 2.3test) fails with: Feb 25 17:30:30 starting migration of VM 100 to node 'nodo02' (172.16.20.32) Feb 25 17:30:30 copying disk images Feb 25 17:30:30 starting VM 100 on remote...
  11. QEMU 1.4, Ceph RBD support (pvetest)

    There is another problem. I have upgraded two hosts of the cluster, migrating all the VMs to the last host. Now I am trying to migrate the VMs back so I can upgrade the last host, but I get an error: Feb 25 17:30:30 starting migration of VM 100 to node 'nodo02' (172.16.20.32) Feb 25 17:30:30...
  12. QEMU 1.4, Ceph RBD support (pvetest)

    I added this to Storage.pm: sub volume_is_base { my ($cfg, $volid) = @_; my ($sid, $volname) = parse_volume_id($volid, 1); return 0 if !$sid; if (my $scfg = $cfg->{ids}->{$sid}) { my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); my ($vtype, $name, $vmid...
  13. QEMU 1.4, Ceph RBD support (pvetest)

    I confirm: the problem is line 4481 of QemuServer.pm: if (PVE::Storage::volume_is_base($storecfg, $volid)){ but there is no "volume_is_base" function in Storage.pm. Thanks, Fabrizio root@nodo01:/usr/share/perl5/PVE# pveversion -v pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)...
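    For context, results 12 and 13 refer to the same missing helper. The sketch below completes the fragment quoted in result 12; it assumes that PVE::Storage::Plugin::parse_volname() returns an is-base flag as its sixth value, so treat it as a reconstruction for illustration rather than the code that later shipped in pvetest:

        # Hypothetical completion of the volume_is_base() helper for PVE/Storage.pm.
        # Returns 1 if the volume is a base (template) image, 0 otherwise.
        sub volume_is_base {
            my ($cfg, $volid) = @_;

            # "storage:volname" -> storage id + volume name; plain paths yield no id.
            my ($sid, $volname) = parse_volume_id($volid, 1);
            return 0 if !$sid;

            if (my $scfg = $cfg->{ids}->{$sid}) {
                my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
                # Assumption: the sixth value returned by parse_volname() is the is-base flag.
                my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
                    $plugin->parse_volname($volname);
                return $isBase ? 1 : 0;
            }

            return 0;
        }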
  14. PVE 2.3 Ceph multiple disk images

    I think you have found the problem :) Thanks, Fabrizio
  15. PVE 2.3 Ceph multiple disk images

    I think the problem is the RBD image format. It works for the virtual machine whose disk image is "format 1" (the default format); it doesn't work for the virtual machine whose disk image is "format 2" (the format that supports cloning). I created the first machine with PVE 2.2...
  16. PVE 2.3 Ceph multiple disk images

    [root@ceph-1 ~]# rbd ls -l
    NAME           SIZE    PARENT  FMT  PROT  LOCK
    vm-102-disk-1  120G            1
    vm-102-disk-2  1024G           1
    vm-104-disk-1  81920M          1
    vm-105-disk-1  61440M          1
    vm-110-disk-1  1024G           1
    vm-104-disk-2  32768M          2
    vm-104-disk-3  36864M...
  17. PVE 2.3 Ceph multiple disk images

    I also noticed these differences in the disk image names:
    [root@ceph-1 ~]# rados --pool=rbd ls | grep vm-10
    vm-104-disk-1.rbd
    vm-102-disk-1.rbd
    rbd_id.vm-104-disk-2
    vm-102-disk-2.rbd
    rbd_id.vm-106-disk-1
    rbd_id.vm-104-disk-3
    rbd_id.vm-108-disk-1
    rbd_id.vm-107-disk-1
    vm-105-disk-1.rbd
  18. PVE 2.3 Ceph multiple disk images

    Hello. Thanks for your reply. This is my configuration:
    - 3 x Ceph nodes (as KVM virtual machines, one per host, using all the local space on separate disks); this is the version: [root@ceph-1 ~]# rpm -q ceph ceph-0.56.2-0.el6.x86_64
    - 3 x Proxmox hosts, with this version...
