Search results

  1. error move lxc root disk from local to ceph

     I think you are correct. Here is my actual size:

     zfs get used pve-blade-108-internal-data
     NAME                         PROPERTY  VALUE  SOURCE
     pve-blade-108-internal-data  used      78.0G  -

     78G used while only 70G was allocated. I'll try to extend it and move it again once the server becomes available. (See the resize sketch after the results list.)
  2. error move lxc root disk from local to ceph

     zfspool: pve-blade-108-internal-data
         pool pve-blade-108-internal-data
         content rootdir,images
         nodes pve-blade-108
  3. error move lxc root disk from local to ceph

     Perhaps. Do you know how I can check it?
  4. error move lxc root disk from local to ceph

     arch: amd64
     cores: 96
     hostname: svr-ub-108
     memory: 401408
     mp1: <hidden>
     mp2: <hidden>
     mp3: <hidden>
     mp4: <hidden>
     mp5: <hidden>
     mp6: <hidden>
     mp7: <hidden>
     mp8: <hidden>
     net0: name=eth0,bridge=vmbr0,gw=<hidden>,hwaddr=<hidden>,ip=<hidden>,type=veth
     net1...
  5. error move lxc root disk from local to ceph

     I am trying to move the LXC root disk from local to Ceph (over 50% free space on Ceph). This is the log for the error:

     /dev/rbd3
     Creating filesystem with 18350080 4k blocks and 4587520 inodes
     Filesystem UUID: e55036f9-7f8a-4a49-af36-7929f96043cd
     Superblock backups stored on blocks: 32768...
  6. [SOLVED] install issue with lsi 2208

     I just installed an external LSI 3008 in IT mode and it works.
  7. [SOLVED] install issue with lsi 2208

     NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
     fd0              2:0    1    4K  0 disk
     sda              8:0    0   20G  0 disk
     |-sda1           8:1    0 1007K  0 part
     |-sda2           8:2    0  512M  0 part
     `-sda3           8:3    0 19.5G  0 part
       |-pbs-swap   253:0    0  2.4G  0 lvm  [SWAP]
       `-pbs-root...
  8. [SOLVED] install issue with lsi 2208

     I just installed a fresh PBS server on a Supermicro 1027R-72BRFTP. It has an internal 2208 controller. I have 2 dedicated SATA drives (boot mirror) connected directly to the motherboard and 2 SAS SSD (8TB) drives connected to the front panel, but Proxmox does not see the two SAS drives. But...
  9. proxmox backup replication? (second twin off site)

     Finally I revived the hardware for our first PBS, and before installing I would like to ask some questions to find the best approach for our requirements: backing up LXCs/VMs (what PBS is designed for). 1. What is the best method: an NFS share to store VM backups (hyper-v...
  10. proxmox 6.3 + lxc + nfs share

     I made a separate container for Docker and NFS (to reduce and share load across the cluster).
  11. proxmox 6.3 + lxc + nfs share

     Check my other post: https://forum.proxmox.com/threads/nfs-share-from-lxc.65158/#post-385775
  12. hardware ceph question

     The plan is to use HDDs (not SSDs, so 6Gb/s should be good enough) for Ceph storage. So for 6Gb/s, can I use the onboard controller to work with Ceph?
  13. hardware ceph question

     Do I need another LSI card, or is the onboard one good enough? I am considering one of the following models: 6027TR-DTRF+, 6027TR-D71RF+, 6027TR-D70RF+
  14. hardware ceph question

     Right, I forgot. Is it a good idea to build multiple Ceph nodes with a small number of OSDs each (in this case up to 6), or is it better to get servers with more capacity (more HDD sleds)?
  15. hardware ceph question

     I am planning to add some more nodes to the cluster (we mainly need more computational power, CPU/RAM), but I thought to add HDD-based Ceph storage for low-access/archive data (we have an existing 5-node SSD-based setup to support heavy read tasks). I am thinking of doing the following: 3x...
  16. nfs share from lxc

     Eventually we managed to make it work: we set some lxc.apparmor settings. (See the AppArmor sketch after the results list.)
  17. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

     Thanks. Once I install all the new hardware I'll post here the commands I am planning to run, for a short review.
  18. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

     Is it safe to do on a running Ceph cluster?
  19. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

     I asked a similar question around a year ago but could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster on 6.3-2, with a Ceph pool of 24 SAS3 OSDs (4 or 8 TB, more will be added soon), split across 3 nodes; 1 more node will be added this week. We plan to add more... (See the device-class sketch after the results list.)
  20. ceph rebalance osd

     I just got rid of the old Jewel clients and ran the commands, but it did not make any change (see the image): one of the OSDs is very full, and once it gets fuller Ceph freezes. (See the balancer sketch after the results list.)

     ceph balancer status
     {
         "last_optimize_duration": "0:00:00.005535",
         "plans": [],
         "mode": "upmap"...
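
A note on results 1-5: the mkfs log in result 5 creates a filesystem of 18350080 4k blocks, i.e. exactly 70 GiB, while the ZFS output in result 1 reports 78G used, which is why the move to Ceph fails. Below is a minimal sketch of how the usage check and the resize could look, assuming PVE 6.x, container ID 108, and a subvol-style rootfs dataset (all three are assumptions, not taken from the thread):

     # Check how much data the rootfs actually uses vs. its quota
     # (dataset name is a guess; list the real one with `zfs list`).
     zfs get used,refquota pve-blade-108-internal-data/subvol-108-disk-0

     # Grow the root disk so the target volume is at least as large as the
     # used data, then retry the move to the Ceph storage.
     pct resize 108 rootfs +10G
     pct move_volume 108 rootfs <ceph-storage>   # `pct move-volume` on newer PVE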
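
On result 16 (NFS share from inside an LXC container): the snippet does not show which lxc.apparmor settings were actually used, so the lines below are only one common approach, not necessarily what was done there. They would go into /etc/pve/lxc/<vmid>.conf (vmid is a placeholder):

     # Bluntest option: run the container without AppArmor confinement.
     # Works for mounting NFS inside the container, but drops a protection layer.
     lxc.apparmor.profile: unconfined

     # Tighter alternative (profile name is hypothetical): a custom profile
     # derived from the default one that additionally permits
     # `mount fstype=nfs*` and `mount fstype=rpc_pipefs`.
     # lxc.apparmor.profile: lxc-default-with-nfs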

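On results 17-19 (mixing SAS SSD and NVMe OSDs): the usual tool for keeping pools on separate media is CRUSH device classes, and the commands can be run on a live cluster, although repointing a pool to a new rule does trigger data movement. A rough sketch with placeholder OSD, rule, and pool names (none of them from the thread):

     # Tag an OSD with a device class (clear any auto-assigned class first).
     ceph osd crush rm-device-class osd.24
     ceph osd crush set-device-class nvme osd.24

     # Create a replicated CRUSH rule that only selects OSDs of that class.
     ceph osd crush rule create-replicated replicated-nvme default host nvme

     # Point a pool at the new rule (this starts a rebalance).
     ceph osd pool set <pool-name> crush_rule replicated-nvme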
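
On result 20 (one overfull OSD, balancer idle): the status output shows upmap mode with an empty plan list. As a hedged sketch only, these are the commands typically involved in enabling the upmap balancer, plus a manual reweight as a stop-gap for a single overfull OSD (the OSD id and weight are placeholders):

     # upmap requires all clients to speak Luminous or newer.
     ceph osd set-require-min-compat-client luminous

     ceph balancer mode upmap
     ceph balancer on
     ceph balancer status

     # Stop-gap for one overfull OSD: lower its reweight so PGs move off it,
     # or let Ceph pick the most utilized OSDs automatically.
     ceph osd reweight 12 0.90
     ceph osd reweight-by-utilization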