Search results

  1. Your GW is marking spam, not hiding the internal host

    What can we do to make this mail gateway a professional one?
  2. Cluster conf not correctly replicated ...

    A reinstall solved the problem ... cool!
  3. Cluster conf not correctly replicated ...

    I just changed from master to slave (with one command), and now when I log into the secondary PMG the conf is not replicated. Do you have any idea?
  4. CEPH 3-node cluster: data size available?

    I think I will go with this setup: 2 × 120 GB SSDs in RAID1 (mdadm) ==> for Proxmox + ISOs and templates; 2 × 500 GB HDDs, in RAID1 too (mdadm is my friend), as "LVM-Thin" ==> VMs + LXC. With 3 nodes ==> 1500 GB of free space (each RAID1 mirror leaves 500 GB usable per node, × 3 nodes) and good hardware fault tolerance.
  5. CEPH 3-node cluster: data size available?

    One OSD = 500 GB, so 2 OSDs × 3 nodes = 6 OSDs (total cluster) = 3000 GB. With replication size 3:
    500 × 0.8 = 400 GB; 400 × 6 = 2400 GB (80% fill of the OSDs)
    2400 / 3 = 800 GB (space after replication)
    800 / 3 = 266 GB (space per host)
    266 × 2 = 533 GB (data that needs to be moved after one node fails)
    800 − 533 = 266...
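    A minimal sketch (in Python, not from the thread) that reproduces the arithmetic above; all constants (2 × 500 GB OSDs on each of 3 nodes, replication size 3, an 80% fill target) are assumptions taken from the snippet, and the labels mirror the poster's:

    ```python
    # Ceph capacity math from the post above, as a script. All figures
    # are assumptions taken from that post.
    OSD_SIZE_GB = 500
    OSDS_PER_NODE = 2
    NODES = 3
    REPLICATION = 3     # pool size 3
    FILL_RATIO = 0.8    # keep OSDs at most 80% full

    total_osds = OSDS_PER_NODE * NODES        # 6 OSDs in the cluster
    raw_gb = total_osds * OSD_SIZE_GB         # 3000 GB raw
    filled_gb = raw_gb * FILL_RATIO           # 2400 GB at 80% fill
    usable_gb = filled_gb / REPLICATION       # 800 GB after replication
    per_host_gb = usable_gb / NODES           # ~266 GB of data per host
    # The post labels the next figure "data that needs to be moved after
    # one node fails": the ~533 GB held by the two surviving hosts.
    moved_gb = per_host_gb * (NODES - 1)
    headroom_gb = usable_gb - moved_gb        # ~266 GB, where the post trails off

    print(f"raw {raw_gb} GB -> usable {usable_gb:.0f} GB after replication")
    print(f"per host: {per_host_gb:.0f} GB, "
          f"headroom after one node fails: {headroom_gb:.0f} GB")
    ```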
  6. CEPH 3-node cluster: data size available?

    Hello, I plan to install a 3-node Proxmox cluster, with 1 SSD (120 GB) and 2 × 500 GB HDDs per node. I want the data (HDD × 2) to stay available if I lose one node ... Which technology is better for me: Ceph RBD, or GlusterFS (I tested it but do not understand the data size calculation and how it works)...
  7. Shared-nothing architecture with Proxmox?

    I really like RAID1 with mdadm ... very easy to repair and maintain ... (I have only two HDDs per node). So I don't know whether I should choose GlusterFS, CephFS, or stay with LVM-Thin (a local FS), but then without the possibility of cold migration.
  8. Shared-nothing architecture with Proxmox?

    I read about hyper-converged setups... so, is my hardware, with CephFS (or RBD), a decent solution? Will Ceph kill my 16 GB node with its Core i5 CPU? ... Will this schema be fine, in your opinion?
  9. Shared-nothing architecture with Proxmox?

    Here you can see the architecture I want to build.
  10. Fresh ZFS install (RAID1), can't boot after reboot

    Trying to install on an HP Pro 3500 with UEFI disabled; it seems the Proxmox installer can't install GRUB on the disk! Partition scheme ... with the ZFS RAID1 install: 1 MB BIOS grub, 539 MB B, 499.6 GB ZFS ... on sda and sdb, but it can't boot ...
  11. Shared-nothing architecture with Proxmox?

    I think I will subscribe to your product; many goodies since 2009! (my first try of Proxmox). The aim is to build an Internet service provider architecture (a very small one, OK ;-)). I have hardware constraints, and I don't want to deploy more and more nodes for redundancy! (starting with 6 or 8)...
  12. Shared-nothing architecture with Proxmox?

    Do you think it's possible or not? (And why?)
  13. Replicate default Proxmox LVM configuration when installing from Debian

    I think you can find something interesting here: https://pve.proxmox.com/wiki/Storage:_LVM_Thin
  14. Fresh ZFS install (RAID1), can't boot after reboot

    Hello, I've just installed Proxmox 5.3 on an HP Core i5 3470 with 16 GB RAM and 2 × 500 GB HDDs, choosing ZFS RAID1 for these two drives in order to have RAID1 (I hope that ZFS in RAID1 mode will not eat all my 16 GB of memory ...). The install is OK, but when the computer reboots .... tadadadddadaa: PXE boot...