Search results

  1. Test : Proxmox4-ceph-network dedicated...

    Hello, root@pve-ceph1:~# pveversion -v proxmox-ve: 4.0-3 (running kernel: 3.19.8-1-pve) pve-manager: 4.0-24 (running version: 4.0-24/946af136) pve-kernel-3.19.8-1-pve: 3.19.8-3 lvm2: 2.02.116-pve1 corosync-pve: 2.3.4-2 libqb0: 0.17.1-3 pve-cluster: 4.0-14 qemu-server: 4.0-13 pve-firmware...
  2. Test : Proxmox4-ceph-network dedicated...

    Hello, I am testing the Ceph installation on my Proxmox 4.0 beta cluster. Problem installing the network dedicated to Ceph: root@pve-ceph1:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:1f:d0:cd:17:2d UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX...
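
    A dedicated Ceph network is normally declared on its own NIC. A minimal sketch of such a setup in /etc/network/interfaces (the interface name eth1 and the 10.10.10.0/24 subnet are illustrative assumptions, not taken from the post):

      # /etc/network/interfaces: hypothetical second NIC reserved for Ceph traffic
      auto eth1
      iface eth1 inet static
          address 10.10.10.1
          netmask 255.255.255.0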
  3. [SOLVED] debian jessie kvm installation

    Hi mir, perhaps this news will make you change your mind :-) http://blog.nixpanic.net/2015/05/glusterfs-370-has-been-released.html I use glusterfs. Thanks. Moula.
  4. Ceph Failed initialization

    I was talking about pinging the network dedicated to Ceph, so ping from each node to check that it works. Did you format and add the disks dedicated to Ceph on each node? A disk dedicated to the journal is advised. node1# ping 10.10.10.1 node1# ping 10.10.10.2 node1# ping 10.10.10.3 If your...
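
    A minimal sketch of creating an OSD with its journal on a separate disk on each node (device names /dev/sdb and /dev/sdc are placeholders, and the pveceph option name is from memory of that era's tooling, so verify with pveceph help):

      # create the OSD on the data disk, with the journal on a dedicated disk
      pveceph createosd /dev/sdb -journal_dev /dev/sdc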
  5. Ceph Failed initialization

    If you have created a dedicated "Ceph" network, first check that it works from one node to another. # ping 10.10.0.1 # ping 10.10.0.2 # ping 10.10.0.3
  6. Ceph Failed initialization

    Have you followed this procedure? http://pve.proxmox.com/wiki/Ceph_Server If so, give some information such as: # service ceph status # ceph health ... so that we can help you.
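
    The standard Ceph commands for gathering that information (stock Ceph CLI calls, not specific to this thread):

      ceph health        # one-line cluster health summary
      ceph -s            # fuller status: monitors, OSDs, placement groups
      ceph osd tree      # per-OSD up/down view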
  7. PVE 3.3 and pve-kernel-3.10.0-7-pve from pvetest fails to find VG pve on boot

    http://forum.proxmox.com/threads/21001-bug-pve-kernel3-10-0-7-pve-don-t-boot-e1000e
  8. bug? : pve-kernel-3.10.0-7-pve doesn't boot - e1000e

    Thanks Udo, it works. Thank you very much.
  9. pvetest updates and RC ISO installer

    Hi Udo, look at this link: http://forum.proxmox.com/threads/21001-bug-pve-kernel3-10-0-7-pve-don-t-boot-e1000e Some help :-) Thanks.
  10. bug? : pve-kernel-3.10.0-7-pve doesn't boot - e1000e

    My server does not boot with the kernel pve-kernel-3.10.0-7-pve, while the 2.6.32 kernel boots fine. The messages: e1000e: The NVM Checksum Is Not Valid. Volume group "pve" not found. With the command cat /proc/modules: e1000e 268333 0 - Live...
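
    A sketch of confirming both symptoms from a rescue shell, using standard tools only (nothing here is from the original thread):

      dmesg | grep -i e1000e     # look for "The NVM Checksum Is Not Valid"
      lsmod | grep e1000e        # check whether the NIC driver loaded at all
      vgscan                     # check whether LVM can see the "pve" volume group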
  11. Corosync Error

    With 3 nodes, to use HA you must configure fencing: http://pve.proxmox.com/wiki/Fencing
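
    For reference, a fencing entry in /etc/pve/cluster.conf looks roughly like this (a sketch only; the fence_ipmilan agent, addresses and credentials are placeholders, see the wiki page above for the supported agents):

      <fencedevices>
        <fencedevice agent="fence_ipmilan" name="ipmi1" ipaddr="192.168.0.10" login="admin" passwd="secret" lanplus="1"/>
      </fencedevices>
      <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1">
          <fence>
            <method name="1">
              <device name="ipmi1"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>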
  12. pvetest updates and RC ISO installer

    I just want to confirm what Alexandre said: ceph: OSD daemons are not always starting at boot (maybe related to /etc/pve and the pve-cluster service?). I have to restart them several times, but not all OSDs.
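
    On that generation of Ceph (sysvinit-style service scripts), a single OSD could be started by hand, roughly as follows (osd.0 is a placeholder id):

      service ceph start osd.0     # or: /etc/init.d/ceph start osd.0
      ceph osd tree                # verify the OSD came back up and in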
  13. proxmox 3.3 not booting after installation but installation is successful

    Download the 3.4 RC1 ISO, install it and try to boot again; perhaps it's a problem with EFI.
  14. VM freezes for a few minutes after migration and gets time offset

    Upgrade your system: # vim /etc/apt/sources.list add: deb http://download.proxmox.com/debian wheezy pvetest # apt-get update && apt-get -y dist-upgrade && apt-get install -y pve-kernel-3.10.0-7-pve Reboot your node with this kernel if you don't use OpenVZ. Thanks.
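
    Spelled out step by step, the sequence above is (same commands as the post, with the corrected pve-kernel-3.10.0-7-pve package name):

      echo "deb http://download.proxmox.com/debian wheezy pvetest" >> /etc/apt/sources.list
      apt-get update
      apt-get -y dist-upgrade
      apt-get install -y pve-kernel-3.10.0-7-pve
      reboot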
  15. pvetest updates and RC ISO installer

    I moved one of my data centers to Ceph, with GlusterFS as NFS; Ceph is even faster between servers. The kernel 3.10.0-7 still does not boot on the ASUS KGPE-D16! Thank you very much.
  16. NFS over internet

    Why not cluster storage with another WAN IP? Another SDS project, S3-compatible: http://www.skylable.com/ Bye.
  17. Proxmox VE 3.3 released!

    In the pvetest repository: # apt-get install pve-kernel-3.10.0-6-pve
  18. Two Factor Authentication using U2F

    Why do you not use another external server, like the FreeIPA project (freeipa.org), to do that? Thanks.
  19. RGManager doesn't start

    After that, run # ccs_config_validate -v -f /etc/pve/cluster.conf.new then go to the GUI: HA -> Activate. Reboot the nodes. Thanks.
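
    The full PVE 3.x workflow implied here, as a sketch (the config_version bump follows the standard cluster.conf procedure and is not stated in the post):

      cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
      # edit cluster.conf.new: make the HA changes and increment config_version
      ccs_config_validate -v -f /etc/pve/cluster.conf.new
      # then in the GUI: Datacenter -> HA -> Activate, and reboot the nodes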