That gave me a big scare, and I never used containers on a Ceph pool again. Instead I used an NFS share for my containers and Ceph for my VMs, and I haven't run into an issue since. I also stopped using containers altogether, because they can bring a node down totally; I have to reboot a node...
I am not in a position to reboot the node yet because it is in production and there is no space on the other nodes. I am planning to add another node in the coming week. I was just wondering whether there is a service I can restart that might resolve this, as my current pool utilization is at 90%. I...
I am trying to add additional OSDs to my cluster, but they are not being created. I do not get any errors; after the createosd command it runs through everything and then stops/freezes at "The operation has completed successfully". See below.
After that "Ceph OSD sdc - Create" just runs under...
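In case it helps to narrow down where the create got stuck, a few hedged checks I would run on that node (the device `/dev/sdc` is from the post above; everything else is a generic command, not something the thread confirms):

```shell
# Did ceph-volume actually prepare/activate anything on the disk?
ceph-volume lvm list /dev/sdc

# Is an old partition table or leftover signature blocking creation?
lsblk /dev/sdc
wipefs --no-act /dev/sdc   # --no-act only reports signatures, changes nothing

# Did the new OSD register in the CRUSH map despite the task hanging?
ceph osd tree

# Any errors from the OSD services in the last hour?
journalctl -u 'ceph-osd@*' --since -1h
```

If `ceph osd tree` already shows the new OSD as `down`, the hang is usually in activation rather than preparation, which the journal output should confirm.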
Is there a way to get a list of the VM disks that live on a specific OSD?
Currently I am running the following.
ceph osd map SSD vm-100-disk-0
But that is when I already know the disk name.
I want to find out which VM is hogging disk IO on the OSD that is currently sluggish.
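I don't know of a single "list images on OSD N" command, but a small loop over `rbd ls` plus the `ceph osd map` call above gets close. A sketch, where the pool name `SSD` comes from the thread and OSD `3` is an example:

```shell
#!/bin/sh
# parse_acting pulls the acting OSD set (e.g. "3,1,5") out of a
# `ceph osd map` output line.
parse_acting() {
    sed -n 's/.*acting (\[\([0-9,]*\)\].*/\1/p'
}

# Only meaningful on a node with the Ceph CLI and a live cluster.
if command -v rbd >/dev/null 2>&1; then
    POOL=SSD        # pool name from the thread
    TARGET_OSD=3    # the OSD you suspect is sluggish (example)
    for img in $(rbd ls "$POOL"); do
        acting=$(ceph osd map "$POOL" "$img" | parse_acting)
        # Match TARGET_OSD anywhere in the comma-separated acting set.
        case ",$acting," in
            *",$TARGET_OSD,"*) echo "$img -> acting osds [$acting]" ;;
        esac
    done
fi
```

Caveat: `ceph osd map` only maps the image *name* as one object; an image's data is striped across many PGs and OSDs, so treat this as a rough filter. For the "who is hogging IO" question specifically, `rbd perf image iostat SSD` (available since Ceph Nautilus) shows per-image IOPS and throughput directly, which is usually the faster route.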
I am configuring Ceph on a 3-node PVE cluster with mixed SSDs and HDDs. I am currently creating a ruleset for a specific device class, following https://pve.proxmox.com/pve-docs/chapter-pveceph.html
I need to specify a failure-domain but I am not sure what to set it to.
Will someone be able to...
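For what it's worth, device-class rules normally use `host` as the failure domain on a small cluster, so that each replica lands on a different node. A sketch, where the rule and pool names are just examples, not from the thread:

```shell
# One replicated rule per device class; "host" as the failure domain
# makes Ceph place each replica on a different node.
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Point each pool at the matching rule (pool names are examples).
ceph osd pool set ceph-ssd crush_rule replicated_ssd
ceph osd pool set ceph-hdd crush_rule replicated_hdd
```

On a 3-node cluster with size=3, `host` means one replica per node; setting the failure domain to `osd` instead would let replicas share a node, which survives a disk failure but not the loss of a whole node.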
Yes, it happened to two VMs whose hard disks I resized. Sorry, I mixed the screenshots up. Here is 2043.
I am certain that all my nodes can access the Ceph cluster, as I have a running VM on Node4 that uses the same RBD pool.
I changed the storage to KRBD in the morning and later that afternoon I made the resize.
This is what syslog says:
Dec 5 15:55:28 node6 pvedaemon: <root@pam> update VM 2043: resize --disk sata0 --size +100G
Dec 5 15:55:28 node6 kernel: [8669263.134425] rbd1: detected capacity change...
I have a 7-node cluster with Ceph RBD. I have a lot of existing VMs running on the cluster and decided to try out containers. I noticed that KRBD needs to be enabled for that, so I ticked the KRBD box and added "Container" to the content of the RBD storage, alongside "Disk Image". I then...
I purchased 10Gb TN9510 cards to use for my Ceph network, but cannot seem to get them working.
dmesg shows the following.
root@proxmox1:~# dmesg | grep tn40xx
[ 6.444671] tn40xx: Tehuti Network Driver, 0.3.6.15
[ 6.444740] tn40xx: Supported phys : QT2025 TLK10232 AQR105 MUSTANG...
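Since the driver itself loads, the next step is usually to confirm the card is bound and an interface exists. Some hedged checks (none of these are confirmed by the thread, just generic diagnostics):

```shell
# Is the Tehuti card visible on the PCI bus at all?
lspci -nn | grep -i tehuti

# Is the tn40xx module loaded and in use?
lsmod | grep tn40xx

# If you built the driver via DKMS, confirm it built for the
# running kernel (only applies if you installed it that way).
dkms status 2>/dev/null

# Does a network interface exist for it, and what is its link state?
ip -br link
```

If `lspci` sees the card but no interface appears, the loaded driver may not support that card's PHY; the "Supported phys" line in the dmesg output above is worth comparing against the PHY the TN9510 actually uses.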