Search results

  1. hardware ceph question

    The plan is to use HDDs (not SSDs, so 6 Gb/s should be good enough) for Ceph storage. So for 6 Gb/s, can I use the onboard controller to work with Ceph?
  2. hardware ceph question

    Do I need another LSI card, or is the onboard one good enough? I plan to use one of the following models: 6027TR-DTRF+, 6027TR-D71RF+, 6027TR-D70RF+
  3. hardware ceph question

    Right, I forgot: is it a good idea to make multiple Ceph nodes with a small number of OSDs each (up to 6 in this case), or is it better to get a server with more capacity (more HDD sleds)?
  4. hardware ceph question

    I am planning to add some more nodes to the cluster (we mainly need more computational power, CPU/RAM), but I thought to add HDD-based Ceph storage for low-access/archive use (we have an existing 5-node SSD-based setup to support heavy read tasks). I am thinking of doing the following: 3x...
  5. nfs share from lxc

    Eventually, we managed to make it work: we set some lxc.apparmor settings
  6. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    Thanks. Once I install all the new hardware, I'll post the commands I am planning to run here for a short review.
  7. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    Is it safe to do on a running Ceph cluster?
  8. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago, but I could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2, with a Ceph pool of 24 SAS3 OSDs (4 or 8 TB; more will be added soon), split across 3 nodes, with 1 more node being added this week. We plan to add more...
  9. ceph rebalance osd

    I just got rid of the old Jewel clients and ran the commands, but it did not make any change; see the image: one of the OSDs is very full, and once it gets fuller, Ceph freezes. ceph balancer status { "last_optimize_duration": "0:00:00.005535", "plans": [], "mode": "upmap"...
  10. proxmox 6.3 + lxc+ nfs share

    Yes, I would like to run the NFS server inside the container, to enable another system to access it.
  11. proxmox 6.3 + lxc+ nfs share

    Bump again, I still need some help :)
  12. proxmox 6.3 + lxc+ nfs share

    I found the reason: it is available only for the root user, and I was logged in as another user (but with root permissions). I turned the NFS feature on and rebooted the machine; now I have this error: root@nfs-intenral:~# systemctl restart nfs-kernel-server A dependency job for nfs-server.service failed. See...
  13. proxmox 6.3 + lxc+ nfs share

    There is no NFS checkbox, and I cannot edit Features.
  14. proxmox 6.3 + lxc+ nfs share

    Unprivileged container = NO; what should I do to enable NFS?
  15. proxmox 6.3 + lxc+ nfs share

    I would like to share an NFS folder; the LXC is Ubuntu 18.04. I tried, but every time I receive an error: root@nfs-intenral:~# journalctl -xe -- The result is RESULT. Jan 12 11:21:13 nfs-intenral systemd[1]: rpc-svcgssd.service: Job rpc-svcgssd.service/start failed with result 'dependency'. Jan 12...
  16. [SOLVED] Ceph - slow Recovery/ Rebalance on fast sas ssd

    I just wanted to make sure that I did not misconfigure something. Thanks!
  17. [SOLVED] Ceph - slow Recovery/ Rebalance on fast sas ssd

    Under load we have around 8-10 GB/s read throughput (but most of our files are large, and our IOPS are under 2k all the time); we don't have enough CPUs to consume the entire read bandwidth (not yet :) ). I created a pool with 128 PGs and a 3/2 replica, and here are the results: root@pve-srv2:~# echo 3 | tee...
  18. [SOLVED] can i expand\adding storage

    I am now planning to buy a new server dedicated to PBS, to back up around 50-100 VMs. We keep growing, and I would like to add storage on demand. Can I easily add storage to the main backup pool?
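
The "nfs share from lxc" and "proxmox 6.3 + lxc+ nfs share" threads above revolve around running an NFS server inside a privileged LXC container. As a minimal sketch only: the container ID (101) is hypothetical, and relaxing AppArmor to `unconfined` trades isolation for convenience; the posts mention tuning specific lxc.apparmor settings instead, which is safer.

```
# /etc/pve/lxc/101.conf  -- 101 is a hypothetical container ID
# Privileged container (the GUI NFS checkbox is only offered here)
features: mount=nfs
# Loosen AppArmor so the in-container NFS kernel server can start;
# a custom profile permitting nfs mounts is the stricter alternative.
lxc.apparmor.profile: unconfined
```

After editing the config, the container must be fully stopped and started again (not just rebooted from inside) for the settings to take effect.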
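
The "ceph rebalance osd" thread above shows the upmap balancer reporting empty plans while one OSD fills up. A hedged sketch of the usual command sequence, assuming all clients are Luminous or newer (the post mentions removing the old Jewel clients, which is the prerequisite):

```
# Allow upmap by requiring Luminous-or-newer clients
ceph osd set-require-min-compat-client luminous
# Enable the balancer module in upmap mode
ceph balancer mode upmap
ceph balancer on
# Inspect what the balancer is (or is not) planning
ceph balancer status
```

If the balancer still produces no plans while an OSD is near-full, `ceph osd reweight-by-utilization` is a coarser fallback that lowers the weight of overfull OSDs directly.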
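
The last thread asks whether a PBS backup pool can grow on demand. A sketch under the assumption that the datastore sits on a ZFS pool (here named `backup`, a hypothetical name, with hypothetical device paths): a PBS datastore is a directory on the pool, so it sees added capacity immediately.

```
# Grow the pool by attaching another vdev (irreversible: the vdev
# cannot be removed again on older ZFS versions, so double-check layout)
zpool add backup raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn
# Confirm the new capacity
zpool list backup
```

Mixing vdev widths or redundancy levels in one pool is allowed but uneven, so matching the existing vdev layout is generally preferable.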