I saw on the wiki ( http://pve.proxmox.com/pve-docs/pve-admin-guide.html#_configuration_files_2 ) that I can configure the migration network:
migration: [type=]<secure|insecure> [,network=<CIDR>]
migration: type=insecure, network=10.10.10.0/24
Is this correct?
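For illustration, a sketch of what I would put in /etc/pve/datacenter.cfg (I assume the property string takes no space after the comma, and 10.10.10.0/24 is just my example migration network):
migration: type=insecure,network=10.10.10.0/24
Then a quick live-migration test, with a hypothetical VMID and target node:
qm migrate 100 pve2 --online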
I've got 2 Dell R730s with 2 10Gb network cards each. I just noticed that after the 4.4.x kernel upgrade, the servers can't reboot.
I get a blocking message during server shutdown: "A start job is running for LSB: Raise network interfaces. (15min 10s / no limit)".
Has anyone already experienced...
I have 2 Proxmox clusters.
The first one has Ceph storage inside the cluster, and I want to connect the second Proxmox cluster to the first cluster's Ceph volume.
On the second cluster, the connection is OK and I see the correct size of the cluster, but I don't see the first cluster's VM disks.
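For reference, a sketch of the kind of storage.cfg entry I mean on the second cluster (hypothetical monitor IPs and pool name; I copied the keyring to /etc/pve/priv/ceph/ceph-ext.keyring):
rbd: ceph-ext
     monhost 10.10.10.1 10.10.10.2 10.10.10.3
     pool rbd
     content images
     username admin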
A few questions:
What is the officially supported Ceph version on Proxmox now?
On my Proxmox cluster, I have Ceph 0.72.2 and I want to upgrade to Firefly. Has anyone already upgraded their Ceph version?
Must I upgrade Ceph myself, or will Proxmox do it for me?
Sorry for my English.
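In case it helps, here is what I was planning to try for the upgrade, a sketch assuming the upstream ceph.com repository for wheezy (please correct me if Proxmox handles this differently):
echo "deb http://ceph.com/debian-firefly wheezy main" > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get dist-upgrade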
Since the last update, the Windows 7 installer cannot find any hard drive (virtio, IDE, SATA or SCSI).
It asks for a driver location and doesn't find any hard disk.
Any ideas?
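In case it is only a driver problem, the workaround I would try is attaching the virtio driver ISO as a second CD-ROM, a sketch with a hypothetical VMID and ISO name:
qm set 100 -ide3 local:iso/virtio-win.iso,media=cdrom
But that would not explain why the IDE and SATA disks are not seen either.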
My pveversion :
root@proxmox20130622:~# pveversion -v
proxmox-ve-2.6.32: 3.2-129 (running kernel...
I'm testing Ceph in Proxmox ... very, very nice!
I have 2 questions:
1) In my tests, some deleted disks still appear in my RBD volume. How can I delete them (they are no longer in my VM config), and why were they not deleted? (See the commands I tried below.)
2) What does the pveceph purge command do?
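For question 1, the manual commands I found so far (a sketch, assuming the default pool name rbd and a hypothetical leftover image name):
rbd ls -p rbd                  # list all images in the pool
rbd rm rbd/vm-101-disk-1       # remove one leftover image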
Thanks to all
I have a GlusterFS virtual volume replicated on 2 Gluster servers, but in storage.cfg I see only one server IP. Is that normal?
My storage.cfg :
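What I expected is something more like this sketch, using the server2 backup volfile option (hypothetical IPs and volume name; I am not sure the option exists in my version):
glusterfs: Gluster
     server 10.10.10.1
     server2 10.10.10.2
     volume datastore
     content images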
I am trying to connect Proxmox 3.1 (latest update) to a GlusterFS server (3.4) via the GUI, and I get an error:
Aug 29 17:13:59 proxmox01 pvedaemon: WARNING: unable to activate storage 'Gluster' - directory '/mnt/pve/Gluster' does not exist
Aug 29 17:13:59 proxmox01 pvestatd...
I just noticed that Proxmox (3.0 and 2.3, since the latest updates) no longer brings NFS mounts back up (after an NFS server reboot, for example).
Has anyone seen the same?
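My manual workaround for now, a sketch with a hypothetical storage name:
umount -f /mnt/pve/nfs-store
pvesm status
(pvestatd should then remount the active storage, if I understand it correctly.)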
My pveversion :
root@proxmox04:~# pveversion -v
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
Thanks in advance for your work!
I get this warning many times on my test Proxmox cluster:
May 30 17:03:08 proxmox01 pveproxy: WARNING: #011(in cleanup) Can't call method "_put_session" on an undefined value at /usr/lib/perl5/AnyEvent/Handle.pm line 2163 during global...
My choice is made: it will be Ceph.
So I'm doing many tests, and I have found that restores are very slow.
Apparently, the full disk size is restored even if the disk is empty.
Is there a solution?
It's me again with my strange questions ... :confused:
On the wiki, I have seen this:
Can we connect Proxmox to a remote Sheepdog cluster volume with several IP addresses (like Gluster)?
Something like that for...
A new question ... :rolleyes:
I am looking for the "perfect solution" :cool: (Yes, Santa Claus exists ...)
After looking at NAS/SAN solutions, I am studying distributed file systems (Ceph & Gluster).
So my question:
Which system do you prefer with Proxmox, and why? ... if I may ask ... :rolleyes:
It's probably a stupid question, but I haven't found the answer.
In the wiki ( http://pve.proxmox.com/wiki/Storage:_Ceph ), you say we need 3 servers minimum.
- Why 3 servers?
- Do the 3 servers need to be identical?
- Can I have only 2 big Ceph storage nodes (mirroring)?
Sorry for this...
I am trying to discover distributed filesystems (Ceph and Sheepdog).
Sheepdog seems to be easier to configure (I have not tested performance yet).
Will you integrate Sheepdog into the GUI, or only Ceph?
I am testing different solutions, so 1 question for everyone:
What is, for you, the best open source SAN for iSCSI storage + HA?
Openmediavault, Nas4free, Openfiler, other?
All of these have HA capabilities.
I just noticed that I cannot start a VM with its HD on iSCSI/LVM storage plus the -fda argument.
The command:
/usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password...
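For reference, I set the floppy through the args option in the VM config, /etc/pve/qemu-server/100.conf (a sketch; the image path is just an example):
args: -fda /var/lib/vz/images/floppy.img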