Ah yes, IT mode, that was the name I was thinking of, and it is the mode they are in.
OSD 0 is now osd.9, but here is a sample; it should be similar on the other two nodes. Two MX100s and one MX300:
root@stor1:/etc/pve# hdparm -I /dev/sda | grep Model
Model Number: Crucial_CT512MX100SSD1...
Yes, but a recovery hurts performance with so few OSDs. Anyway, they are Crucial MX100s and MX300s. They are connected to RAID controllers (LSI SAS1068E) that can act as HBAs (this required specific firmware; it was a few years ago), so full access is given to the disks.
1) Is there a way to test my ceph.conf before it's applied, to make sure my settings are valid for the version I'm using?
2) I am currently using cephx auth and I would like to disable it and set it to none. Is there a way to do that on the fly?
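For reference, a sketch of what disabling cephx in ceph.conf might look like (these are the standard auth options from the Ceph documentation; note that daemons generally need to be restarted after changing them, so it may not be truly "on the fly"):

```ini
# Hypothetical ceph.conf fragment -- disables cephx cluster-wide.
# All daemons typically need a restart for auth changes to take effect.
[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
```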
Thanks for that. Unfortunately I can't run those tests on drives that are already in "production", as they wipe the drive. That is very interesting to know, though. If all of my drives are consumer and mostly the same model, why would I be getting such drastic differences on that perf test for just...
I have 3 Ceph storage nodes with only 3 SSDs each for storage. They are only on SATA II links, so they max out at about 141 MB/s. I am fine with that, but I have 1 OSD on each node that has absolutely awful performance and I have no idea why. It seems to be osd.0, osd.3, and osd.4 that are just awful...
It seems that in the GUI I can view the disk I/O of a single guest, but I want an overview of all my guests and how much disk I/O each one is using at that moment. There are totals, but that isn't very useful. If it's available in the GUI per guest, how can I get that from the CLI from a...
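For illustration, one way to pull per-guest counters from the CLI might be via pvesh, whose VM status output includes cumulative diskread/diskwrite byte counters. This is only a sketch: the sample data below is made up, and the pvesh invocation and field names should be checked against your Proxmox version.

```python
import json
import subprocess

def guest_disk_io(node, vmid):
    """Return cumulative (diskread, diskwrite) bytes for one guest.

    Assumes `pvesh get /nodes/<node>/qemu/<vmid>/status/current
    --output-format json` works on this Proxmox version.
    """
    out = subprocess.check_output(
        ["pvesh", "get", f"/nodes/{node}/qemu/{vmid}/status/current",
         "--output-format", "json"])
    return parse_disk_io(json.loads(out))

def parse_disk_io(status):
    # diskread/diskwrite are cumulative byte counters in the status object.
    return status.get("diskread", 0), status.get("diskwrite", 0)

# Made-up sample of what the status JSON might contain:
sample = {"name": "vm101", "diskread": 123456, "diskwrite": 654321}
print(parse_disk_io(sample))  # -> (123456, 654321)
```

Looping that over the VMIDs from `qm list` would give the per-guest overview the GUI shows for one guest at a time.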
BTW, why does this have to be done manually? Why can't apt-get find them properly when doing upgrades? It shouldn't require manual intervention every time.
Why do we have the following entry?
root@stor2:/etc/apt/apt.conf.d# cat /etc/apt/apt.conf.d/75pveconf
APT
{
  NeverAutoRemove
  {
    "^pve-kernel-.*";
  };
}
I don't want to have to manually remove old kernels, and there is a stock NeverAutoRemove entry that already keeps the newest kernels...
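For what it's worth, the `^pve-kernel-.*` pattern in that file matches every pve-kernel package, not just the newest ones, which would explain why apt never auto-removes any of them. A quick sanity check of the regex (the package names here are just illustrative examples):

```python
import re

# The NeverAutoRemove pattern from /etc/apt/apt.conf.d/75pveconf.
pattern = re.compile(r"^pve-kernel-.*")

# Illustrative package names, not taken from a real node.
packages = ["pve-kernel-4.4.35-1-pve", "pve-kernel-4.4.40-1-pve", "ceph-common"]
kept = [p for p in packages if pattern.match(p)]
print(kept)  # -> ['pve-kernel-4.4.35-1-pve', 'pve-kernel-4.4.40-1-pve']
```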
Why is it that when I update the Proxmox kernels, it doesn't automatically update/find the required support files as well? It's irritating that I have to manually install them with apt-get every time there is an update.
root@stor2:~# apt-get dist-upgrade -y
Reading package lists... Done
Building...
It hasn't worked well for me. Not sure if it's an NFS thing or what, but it never seems to finish. Maybe I'll just limit its loop to one so that it only tries it out on one small VM first.
How has this worked out for you? How are you creating the RBD snapshots in the first place? Right now I only have a single Ceph pool, so I'm definitely looking for a way to do more efficient backups.