Yes. Since data is spread across multiple OSDs, having OSDs with very different performance specifications will cause a noticeable performance impact: the slow OSDs have to work harder, while the faster OSDs are not used to their full potential.
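For example, something along these lines (run on any node with the Ceph tools installed) shows how data and weights are spread per OSD, which makes an imbalance easy to spot:

ceph osd tree    # OSDs per host with their weights and up/down status
ceph osd df      # per-OSD usage and PG count, handy for spotting imbalance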
So I have been testing 5.0 in a small cluster to see how things are; this is what I have noticed so far.
1/
When trying to do a live migration between two 5.0 Beta servers in a cluster, the following error is displayed:
Mar 27 16:58:51 copying disk images
Mar 27 16:58:51 starting VM 184 on...
This can also happen if you have particularly hot data held on OSDs where these SSDs are the primary; however, this will be hard to check.
However, you can use some CLI commands to dig deeper into the busier disks to see what is making them busy, but if most of your busy OSDs point to a...
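As a rough starting point (not specific to any one setup), these are the sort of commands I mean:

ceph osd perf    # commit/apply latency per OSD, slow journals stand out here
iostat -x 5      # per-disk %util and await on the node itself (from the sysstat package)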
Min 1, max 2; it all depends on where the last 2 copies of data are when you take down 2 nodes.
If you wanted to be able to survive 2 nodes failing out of 4, you would need a 4/2 pool.
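For reference, something like the following would change an existing pool to 4/2 ('mypool' is just a placeholder name):

ceph osd pool set mypool size 4       # keep 4 copies of every object
ceph osd pool set mypool min_size 2   # keep serving I/O as long as 2 copies remain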
The pveproxy service has just died on the remaining nodes and I am unable to start the service; listing /etc/pve or running pvecm status just hangs.
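A rough checklist of what I would look at in that state (the exact cause in your case may differ):

systemctl status corosync pve-cluster   # is the cluster stack itself up?
journalctl -u pve-cluster -n 50         # pmxcfs logs usually show why /etc/pve hangs
pvecm status                            # will hang or error while there is no quorum
pvecm expected 1                        # last resort on a single surviving node to regain quorum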
If I'm correct, CEPH will not operate without the Proxmox wrapper, due to CEPH being set up differently than vanilla CEPH.
Hello,
Over the weekend one of our switches had a period of high CPU, causing our cluster to flap. Once this was resolved, we ended up with a form of broken cluster.
All nodes could communicate with each other via the WebGUI if selected directly, even though they all showed as red.
VMs could not...
So I rolled back the new server to 10.2.5, and the issue has gone away.
It looks like there may have been a change in 10.2.6 that no longer works with the above command.
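If anyone wants to compare, the installed version can be checked like this before and after the rollback:

ceph --version          # version of the running binaries
dpkg -l | grep ceph     # Ceph packages currently installed on the node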
Exactly the same:
pveceph createosd /dev/sdc -journal_dev /dev/sdm1
create OSD on /dev/sdc (xfs)
using device '/dev/sdm1' for journal
ceph-disk: Error: journal specified but not allowed by osd backend
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid...
Hello,
Since the most recent CEPH Jewel update I am unable to create an OSD. I tried via the GUI and got an error code of 1.
I tried via the CLI to see if I got any more information and received the following:
pveceph createosd /dev/sdc -journal_dev /dev/sdm
create OSD on /dev/sdc (xfs)
using...
Did you drill down to the exact folder for the SCSI drivers and make sure it installed the driver?
Which option did you use for the HD-Disk within Proxmox? Virtio or SCSI?
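If it helps, the disk bus and controller can also be checked and changed from the CLI (184 is just an example VMID):

qm config 184                          # look for virtio0/scsi0 entries and the scsihw setting
qm set 184 --scsihw virtio-scsi-pci    # switch to the VirtIO SCSI controller once the guest driver is installed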
This forum is mainly user-to-user support; if you have a production environment and an issue that's affecting you, then you're probably best opening a support ticket and working directly with Proxmox.
If not, wait for one of the dev team to pop past the forum and offer some guidance.
I have seen this for myself: before you create the OSD in Proxmox, create the OSD root folder.
It should then be fine. Since the update to Jewel I have noticed that Proxmox quite often fails to create the root OSD folder, so when it gets to mounting the OSD it just fails.
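Roughly what I do, assuming the "root folder" here is the standard /var/lib/ceph/osd path:

mkdir -p /var/lib/ceph/osd          # parent directory Ceph mounts each OSD under
chown ceph:ceph /var/lib/ceph/osd   # Jewel runs the OSDs as the ceph user
pveceph createosd /dev/sdc          # then create the OSD as usual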