Ceph Firefly (0.80) is released. Is PVE working with this version ?
yes, check http://pve.proxmox.com/wiki/Ceph_Server#Installation_of_Ceph_packages
apt-get update
apt-get install xfsprogs
deb http://ftp.us.debian.org/debian wheezy main contrib
deb http://security.debian.org/ wheezy/updates main contrib
# wheezy-updates, previously known as 'volatile'
deb http://ftp.us.debian.org/debian/ wheezy-updates main
deb [arch=amd64] http://download.proxmox.com/debian wheezy pve-no-subscription
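With those repositories in place, the install on a PVE node typically comes down to the commands below (a sketch following the wiki page linked above; the 10.10.10.0/24 cluster network is only an example, and pveceph takes care of adding the Ceph repository and installing the packages):

#apt-get update && apt-get dist-upgrade
#pveceph install
#pveceph init --network 10.10.10.0/24
#pveceph createmon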
I read everywhere that SAS is supposed to be better and faster than SATA.
I have 3 servers filled with SATA drives for the OSDs and also for boot. The 4th server I filled with SAS drives.
After creating the OSDs on all 4 servers, the "OSD" section shows that all the SAS drives on the 4th server have noticeably higher latency.
I also noticed that during the install, boot, and OSD creation, the SATA drives seemed to be faster than the SAS drives.
Am I not supposed to mix SATA and SAS drives for Ceph?
When I want to add more hard drives, do they have to be identical to the existing ones?
It is not a good idea to mix and match hard drives of different speeds. SAS is certainly faster than SATA. In your case SATA seems faster because the majority of your drives are SATA and they all work together at about the same speed. Ceph tries to write to all drives equally, so your SAS drives may well be faster, but they have to wait for the slower drives to finish before they get their share. You can still mix, but do it equally across all nodes: instead of having all the SAS drives in one node, spread them over the 4 nodes. Take some SATA drives out of the other nodes and use them to fill the 4th one. When you want to replace SATA with SAS, do it in sets of 4. Hope this makes sense.
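For what it's worth, the latency the GUI shows can also be checked from the shell, and ceph osd tree shows the CRUSH weight assigned to each OSD based on its capacity (these are standard Ceph commands, nothing PVE-specific):

#ceph osd perf     (per-OSD commit/apply latency)
#ceph osd tree     (OSD placement per node and CRUSH weights)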
How many replicas are you using? What's your PG count?
Thank you very much. It makes a whole lot of sense.
I have 3 replicas. PG count is 1024 (16 x 2TB HD)
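For reference, the usual rule of thumb is pg_num ≈ (number of OSDs × 100) / replicas, rounded up to a power of two; with 16 OSDs and 3 replicas that gives 16 × 100 / 3 ≈ 533, so 1024 is on the generous side but workable. The value can be checked and only ever increased per pool (the pool name rbd below is just an example):

#ceph osd pool get rbd pg_num
#ceph osd pool set rbd pg_num 1024
#ceph osd pool set rbd pgp_num 1024

pgp_num should be raised to match pg_num, otherwise the new placement groups are not actually used for rebalancing.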
If I want to add more hard drives in the future, can I use 4TB-6TB drives, or am I stuck with 2TB drives to match the current ones? It would be silly if the drives we add to the Ceph nodes had to match the current capacity. Please advise. Thank you for your help.
Now I want to add ISO and installation images. Where do I go to upload them? I've tried to upload but keep running into an error. I can't find any documentation showing me how to do it.
Hello everyone,
1) Is there any detailed documentation for Proxmox and Ceph?
Sure, you can add HDDs of any size later on; it doesn't always have to be 2TB. What I was saying is: since you have 3 replicas on 3 nodes, to balance writes you should replace drives in sets of 3. If you are replacing a 2TB with a 4TB, try to replace three 2TBs with three 4TBs. Note that it "DOES NOT" have to be that way. You can mix and match any sizes, and Ceph will automatically set the weight based on capacity, but using matched sets just gives you a better balance of writes and thus a little more performance.

You cannot upload ISOs onto Ceph RBD storage, if that is what you are trying to do. RBD only supports RAW images. If you want to use Ceph to store ISOs and other disk images such as qcow2 or vmdk, then you have to set up CephFS. Here are some simplified steps to create CephFS on a Ceph cluster. These steps need to be done on all Proxmox nodes that you want to use CephFS on:

1. Install ceph-fuse on the Proxmox node: #apt-get install ceph-fuse
2. Create a separate pool: #ceph osd pool create <poolname> 512 512
3. Create a mount folder on the Proxmox nodes: #mkdir /mnt/cephfs (or any name you like)
4. #cd /etc/pve
5. #ceph-fuse /mnt/cephfs -o nonempty
6. Go to the Proxmox GUI and add the storage as a local directory: /mnt/cephfs. You will see you can use any image type.

To unmount the CephFS simply run: #fusermount -u /mnt/cephfs

To mount the CephFS automatically when Proxmox reboots, add this to /etc/fstab:
#DEVICE PATH TYPE OPTIONS
id=admin,conf=/etc/pve/ceph.conf /mnt/cephfs fuse.ceph defaults 0 0

That is it for creating CephFS. Keep in mind that CephFS is not considered "production ready" yet, but I have been using it for the last 11 months without issue. I use it primarily to store ISOs, templates and other test VMs with qcow2. All my production VMs are on RBD, so even if CephFS crashes it won't be a big loss. Hope this helps.

I thought it was also necessary to have at least one MDS? Could you modify your steps to have this added if I am correct?
Thanks,
Serge
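As an alternative to step 6, the same directory storage can be declared in /etc/pve/storage.cfg; a minimal sketch, assuming a storage ID of cephfs-iso (made up for the example) and the /mnt/cephfs mount point from step 3:

dir: cephfs-iso
        path /mnt/cephfs
        content iso,vztmpl,images

Once that is in place, ISO uploads from the GUI should end up under /mnt/cephfs/template/iso/ like on any other directory storage.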