I think you are correct. Here is my actual size:
zfs get used pve-blade-108-internal-data
NAME PROPERTY VALUE SOURCE
pve-blade-108-internal-data used 78.0G -
78 G used when 70 G was allocated; I'll try to extend it and move it again once the server becomes available.
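If it helps, this is roughly what I plan to run once the server is free: check what the source really uses, then grow the container's root disk so the target ends up larger than the data before retrying the move (the VMID and the size increment below are placeholders, not my real values):
# check the real allocation of the source (name taken from the output above)
zfs get used,referenced pve-blade-108-internal-data
# grow the container root disk before retrying the move
pct resize <vmid> rootfs +10G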
I am trying to move the LXC root disk from local to Ceph (there is over 50% free space on Ceph).
This is the log of the error:
/dev/rbd3
Creating filesystem with 18350080 4k blocks and 4587520 inodes
Filesystem UUID: e55036f9-7f8a-4a49-af36-7929f96043cd
Superblock backups stored on blocks:
32768...
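For context, this is roughly how I am doing the move (the storage ID is a placeholder; on older PVE releases the subcommand is spelled pct move_volume instead of pct move-volume):
# check free space on the target Ceph storage first
pvesm status
# move the container root disk to Ceph; --delete 1 removes the old copy after a successful move
pct move-volume <vmid> rootfs <ceph-rbd-storage> --delete 1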
I just installed a fresh PBS server on a Supermicro 1027R-72BRFTP.
It has an internal 2208 controller.
I have 2 dedicated SATA drives (boot mirror) connected directly to the motherboard,
and 2 SAS SSD (8 TB) drives connected to the front panel.
But Proxmox does not see the two SAS drives. But...
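What I have checked so far, roughly. My assumption is that the 2208 is a MegaRAID RoC, so the SAS disks probably have to be exported as JBOD (or single-drive RAID0 virtual drives) in the controller firmware before the kernel will show them at all:
# confirm the controller is detected and which driver claimed it
lspci -nn | grep -i -e raid -e sas
dmesg | grep -i megaraid
# list the block devices the kernel actually sees, with transport type
lsblk -o NAME,SIZE,MODEL,TRAN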
Finally I received the hardware for our first PBS, and before installing I would like to ask some questions to find the best approach for our requirements:
back up LXCs/VMs (that is what PBS is designed for)
1. What is the best method to do:
NFS share to store VM backups (Hyper-V...
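To make the NFS question concrete, this is the kind of setup I was picturing (server address, export path and datastore name are made up, only to illustrate):
# mount the NFS export
mkdir -p /mnt/backup-nfs
mount -t nfs 192.0.2.10:/export/backups /mnt/backup-nfs
# create a PBS datastore on top of the mount
proxmox-backup-manager datastore create nfs-store /mnt/backup-nfs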
Right, I forgot.
Is it a good idea to build multiple Ceph nodes with a small number of OSDs? In this case each would have up to 6.
Or is it better to get servers with more capacity (more HDD sleds)?
I am planning to add some more nodes to the cluster (we mainly need more computational power, CPU/RAM),
but I thought to add HDD-based Ceph storage for low-access/archive storage (we have existing 5-node SSD-based servers to support heavy read tasks).
I am thinking to do the following: 3x...
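Concretely, the rough idea for keeping the HDD OSDs separate from the existing SSD pool is a device-class CRUSH rule, something like this (pool name, PG count and storage ID are placeholders):
# replicated rule that only selects OSDs with device class "hdd"
ceph osd crush rule create-replicated archive-hdd default host hdd
# archive pool pinned to that rule
ceph osd pool create archive 128 128 replicated archive-hdd
ceph osd pool application enable archive rbd
# expose it to Proxmox as a separate RBD storage
pvesm add rbd archive-rbd --pool archive --content images,rootdir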
I asked a similar question around a year ago, but I did not find it, so I'll ask it here again.
Our system:
Proxmox cluster based on 6.3-2, 10 nodes;
Ceph pool based on 24 OSDs, SAS3 (4 or 8 TB), more will be added soon (split across 3 nodes, 1 more node will be added this week).
We plan to add more...
I just got rid of the old Jewel clients and ran the commands,
but it did not make any change (see the image): one of the OSDs is very full, and once it got fuller, Ceph froze.
ceph balancer status
{
"last_optimize_duration": "0:00:00.005535",
"plans": [],
"mode": "upmap"...