Search results

  1. LDAP user refresh/sync parameter?

    Is there a way to set a refresh interval? (We're using PMG with an action to reject recipients who are not in the LDAP user list.) The question is when and how to set this up. It gave me: 550: 5.1.1 <xxxx@domain.com>: Recipient address rejected: undeliverable address: Recipient address lookup failed...
  2. ZFS RAIDZ2: how to add another disk to extend zpool space?

    Thanks for your answer, but with this method you need twice the capacity of your original zpool? (Something you can't do with one server.)
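A note on why the question keeps coming up: a RAIDZ2 vdev traditionally cannot be grown one disk at a time; the usual ways to add space are adding a whole new vdev to the pool or replacing every disk with a larger one. A quick capacity sanity check, assuming 4 × 1 TB disks per RAIDZ2 vdev as in the thread:

```shell
# Usable space of a raidz2 vdev = (number of disks - 2 parity disks) * disk size.
disks=4; size_tb=1
one_vdev=$(( (disks - 2) * size_tb ))   # 2 TB usable from 4 x 1 TB
two_vdevs=$(( 2 * one_vdev ))           # 4 TB after adding a second identical vdev
echo "one raidz2 vdev: ${one_vdev} TB usable, two vdevs: ${two_vdevs} TB"
```

Adding the second vdev would be along the lines of `zpool add <pool> raidz2 <disk> <disk> <disk> <disk>` (pool and disk names are placeholders), which avoids needing a second server or twice the capacity, at the cost of buying four disks at once.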
  3. ZFS RAIDZ2: how to add another disk to extend zpool space?

    Before doing that... and choosing ZFS (currently RAIDZ2, 4 × 1 TB) as the file system for my cluster... I'm really worried about the scalability of ZFS! I've been working for two years with Ceph (it's very scalable and easy to manage: thanks Proxmox!). I'm trying ZFS (Ceph perf. is not very good...
  4. ZFS RAIDZ2... how to remove a device?

    I can access the server now and will answer ASAP; thank you for your help.
  5. Replace disk in ZFS Pool

    Can we really, REALLY know which hard drive is or isn't used by Proxmox, Linux and ZFS? root@node2:~# ls -lah /dev/disk/by-id total 0 drwxr-xr-x 2 root root 940 May 6 23:42 . drwxr-xr-x 8 root root 160 May 6 23:32 .. lrwxrwxrwx 1 root root 9 May 6 23:32...
  6. Replace disk in ZFS Pool

    I have the same problem; this is really awful and painful to manage! Disks are /dev/sdX, but Proxmox uses /dev/disk/by-id... and ZFS uses yet another label... another idea for adding another layer? KISS? root@tankster2:~# zpool status zPool1 pool: zPool1 state: DEGRADED status: One or more...
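For what it's worth, the three names refer to the same device: /dev/sdX is the unstable kernel name, the /dev/disk/by-id entries are persistent symlinks pointing back to it, and ZFS simply records whichever path it was given. A small sketch that resolves by-id links back to kernel names (the function and its demo directory are illustrative, not a Proxmox tool):

```shell
# Print "id-name -> kernel-device" for every symlink in a by-id style directory.
map_by_id() {
    dir=$1
    for link in "$dir"/*; do
        [ -L "$link" ] || continue
        printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
    done
}

# On a real node you would run: map_by_id /dev/disk/by-id
```

This makes it easy to cross-reference the wwn-* labels in `zpool status` with the /dev/sdX names the kernel logs use.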
  7. ZFS RAIDZ2... how to remove a device?

    NAME                      STATE     READ WRITE CKSUM
    zPool1                    DEGRADED     0     0     0
      raidz2-0                DEGRADED     0     0     0
        12898261183420457887  UNAVAIL      0     0     0  was /dev/disk/by-id/wwn-0x50014ee2b016fa00-part1...
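For a DEGRADED raidz2 with an UNAVAIL member like the one above, the usual recovery is not to remove the device but to replace it, addressing the missing member by its GUID. A command sketch (the new-disk path is a placeholder; the pool name and GUID are the ones from the snippet):

```
zpool replace zPool1 12898261183420457887 /dev/disk/by-id/<new-disk>
zpool status zPool1    # watch resilver progress until the pool is ONLINE again
```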
  8. Change host management IP in order to have VLAN tagging on it?

    Is this type of config supported by Proxmox? (Host management on a VLAN.)
  9. Change host management IP in order to have VLAN tagging on it?

    Is it possible to do that with the web GUI? If I do what's in the wiki, it gives me a "Type: unknown", but the VLAN works perfectly. However, I can't access the VM or CT console: why? Is that normal? I need to isolate the Proxmox hypervisor from the CTs; VMs (user network) must not see the hypervisor, etc. (we...
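A common way to get a tagged management IP on PVE is a VLAN-aware bridge with the address on a tagged sub-interface; the web GUI may display "Type: unknown" for stanzas it did not write itself, but the config still applies. A minimal /etc/network/interfaces sketch (interface name, VLAN tag 10 and addresses are made up):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP tagged on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
```

Guests then attach to vmbr0 with their own VLAN tags, which keeps user traffic off the management VLAN.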
  10. Network traffic between PVE cluster nodes?

    In fact, I would like to completely isolate the Proxmox VE admin network (the only one that can have a gateway ;-)) from the user network (CT + VM); the user network goes through a hardware firewall on one NIC, the Proxmox admin network on another...
  11. Network traffic between PVE cluster nodes?

    Must I deploy a firewall VM (or LXC) on each node of the cluster in order to isolate the LXC containers from each other (like in Docker), or does PVE provide tools for doing that?
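PVE does ship a built-in firewall that filters per VM/CT on every node, so a dedicated firewall guest per node is not required just for isolation. A sketch of a per-guest rules file (the VMID 101, the subnet and the "admin SSH only" policy are illustrative):

```
# /etc/pve/firewall/101.fw  -- hypothetical CT 101
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
# allow SSH from the admin network only; everything else inbound is dropped
IN ACCEPT -source 203.0.113.0/24 -p tcp -dport 22
```

The firewall also has cluster-wide (cluster.fw) and per-host levels, so guest-to-guest traffic can be restricted without any extra appliance.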
  12. Docker / containerd integration in Proxmox: planned?

    Thank you for those answers. But is Docker's use of a /dev/rbd1 (Ceph) device managed by Proxmox a good idea or not?
  13. Docker / containerd integration in Proxmox: planned?

    OK: bare metal + Proxmox + VM + Docker. But why not bare metal + Proxmox + Docker? What about performance with the VM layer? Thank you.
  14. Docker / containerd integration in Proxmox: planned?

    LXC is good (Debian OK, but with the CentOS image... some issues). Do you plan containerd/Docker integration in the Proxmox VE web GUI, as for LXC?
  15. Perf. issue with LACP (layer2+3): poor Ceph performance (with powerful hardware)

    Thanks Alwin, I already read this doc. BUT: when Ceph and PVE are on the same hardware (server), with 3 nodes, what is the meaning of the public network and the cluster network? Where does the data actually pass through?
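In Ceph terms, the public network carries client I/O and MON traffic (including PVE's RBD clients), while the cluster network carries only OSD-to-OSD replication and recovery; on hyper-converged nodes both networks still exist, they just terminate on the same machines. The relevant ceph.conf keys look like this (subnets are placeholders):

```
[global]
    public_network  = 192.0.2.0/24
    cluster_network = 198.51.100.0/24
```

If `cluster_network` is unset, replication shares the public network, which is often where "where does the data really pass?" surprises come from.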
  16. Perf. issue with LACP (layer2+3): poor Ceph performance (with powerful hardware)

    Ouch: I will give it a try; it seems to be the answer! Thanks for those answers.
  17. Is it a good network setup: 9 NICs (gigabit)?

    Hello Aaron, thank you for replying. OK for the native Linux switch, and go for 128 PGs (the default, in fact). You read it wrong: Ceph is on top of LACP (bond), 4 Gbit, not 4 GB (4 Gbit gives me 4 × 125 MB/s... 500 MB/s for HDDs, I think that's enough!), and I don't have the money for a 10Gb switch nor 10Gb NICs...
  18. Is it a good network setup: 9 NICs (gigabit)?

    Is it a good setup? 3 questions... First, we have 9 gigabit NICs per node (3 nodes, PVE cluster); is that good for production? (OSDs are 1 TB 7200 rpm HDDs; in total we have 18 OSDs: are 256 PGs enough, or must we stay at 128?) - 1 cluster: corosync, migration, web GUI (PVE node)...
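On the 128-vs-256 question: the usual starting-point rule is roughly (OSDs × 100) / replica count, rounded to the nearest power of two. With 18 OSDs and an assumed replica count of 3 that actually points at 512, not 128 or 256, though the right value also depends on how many pools share the OSDs. A quick calculation sketch:

```shell
osds=18; replicas=3
target=$(( osds * 100 / replicas ))   # rule of thumb: 600
# Round to the nearest power of two: find the largest power <= target,
# then take whichever of (pg, pg*2) is closer.
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
if [ $(( target - pg )) -gt $(( pg * 2 - target )) ]; then pg=$(( pg * 2 )); fi
echo "rule-of-thumb pg_num for ${osds} OSDs: ${pg}"
```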
  19. Perf. issue with LACP (layer2+3): poor Ceph performance (with powerful hardware)

    ... I did a test with [6 CTs simultaneously doing a dd if= ...]: it seems that Ceph does not use all 3 gigabit links (only 2...): adding up all the bandwidth, 174 MB/s (177 write IOPS).
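The observation that only 2 of 3 links carry traffic is consistent with how LACP works: it balances per flow, not per packet, so each TCP connection is hashed onto a single slave link and a handful of Ceph streams can easily land on only two links (layer3+4 hashing spreads flows better than the layer2 default). A sanity check of the ceilings involved, assuming 3 × 1 Gbit links:

```shell
links=3; mb_per_gbit=125
aggregate=$(( links * mb_per_gbit ))   # ceiling across all flows combined: 375 MB/s
single_flow=$mb_per_gbit               # one flow is hashed to one link: 125 MB/s max
echo "aggregate ceiling: ${aggregate} MB/s, single-flow ceiling: ${single_flow} MB/s"
# On a node, the bond's hash policy can be inspected read-only with
# (bond0 is an assumed interface name):
#   grep 'Transmit Hash Policy' /proc/net/bonding/bond0
```

So 174 MB/s over a few concurrent streams means individual flows are link-bound, not that the bond is broken.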