HowTo: Upgrade Ceph Hammer to Jewel

As Jewel includes CephFS, this raises the question: will CephFS become a storage plugin in Proxmox, enabling the storage of vzdump backups and qcow2 disk images? Is this feature on the roadmap at all?
 

it may be included, but it is explicitly marked as not yet ready for production:
http://docs.ceph.com/docs/jewel/cephfs/ said:
Important
CephFS currently lacks a robust ‘fsck’ check and repair function. Please use caution when storing important data as the disaster recovery tools are still under development. For more information about using CephFS today, see CephFS for early adopters

and

http://docs.ceph.com/docs/jewel/cephfs/early-adopters/ said:
This page provides guidance for early adoption of CephFS by users with an appetite for adventure. While work is ongoing to build the scrubbing and disaster recovery tools needed to run CephFS in demanding production environments, it is already useful for community members to try CephFS and provide bug reports and feedback.

feel free to mount it and use it as directory storage and collect information about what breaks ;) but don't complain if/when stuff breaks, and please don't use it to store your only backups!
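If you want to try exactly that, here is a minimal sketch; the monitor addresses, secret file and storage ID are placeholders, and it assumes a CephFS with a running MDS already exists:
Code:
# Mount CephFS with the kernel client; the secret file contains only the
# base64 key for client.admin (all values here are examples):
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Add it to Proxmox VE as a plain directory storage for test backups / ISOs:
pvesm add dir cephfs-test --path /mnt/cephfs --content backup,iso
The caveat above still applies: treat anything stored there as disposable.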
 
it may be included, but it is explicitly marked as not yet ready for production
I believe that is a leftover in the user documentation. The Ceph team has been very clear that Jewel makes CephFS a "production ready" part of the release. Specifically, the fsck and recovery tools that are referenced in the item you quoted above most certainly are part of Jewel.

From the Jewel 10.2 release notes:

Ceph 10.2 release notes said:
  • CephFS:
    • This is the first release in which CephFS is declared stable! Several features are disabled by default, including snapshots and multiple active MDS servers.
    • The repair and disaster recovery tools are now feature-complete.
    • A new cephfs-volume-manager module is included that provides a high-level interface for creating “shares” for OpenStack Manila and similar projects.
    • There is now experimental support for multiple CephFS file systems within a single cluster.
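For reference, and purely as a sketch, creating a basic CephFS on a Jewel cluster looks roughly like this; the pool names and PG counts are illustrative, and at least one MDS daemon must already be deployed:
Code:
# Data and metadata pools (choose PG counts to match your cluster size):
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Create the filesystem from the two pools and verify it shows up:
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls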
 
Using CephFS for storing backups sounds like a great idea. I'd suggest anyone contemplating this address the following backup concerns:
  1. Don't use the same Ceph cluster for storing your only source of backups. When your Ceph cluster is fubar, so are your backups.
  2. Protect your backups from hackers. If a hacker gains access to your Proxmox host and deletes your VMs and your backups, how will you restore? [1]
    1. Don't rely on unmounting or password-protecting your backup mounts as a means of protecting your backups.
    2. A hacker can lie in wait for you to mount your backups and then delete them.
  3. Protect your backups from physical disasters. Thieves [3], fires [2], tornadoes, hurricanes, floods [4], tsunamis, asteroids, nukes, etc. happen. How will you restore when these things happen?
  4. Periodically test your backups; nothing is worse than finding out that those backups you thought you had are just corrupted, useless garbage (see the restore sketch after this list).
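A minimal sketch of such a restore test, where the archive path, the spare VMID 999 and the target storage are all just examples:
Code:
# Restore a vzdump archive to an unused VMID on scratch storage, boot it,
# verify the data, then throw the test VM away:
qmrestore /mnt/backup/dump/vzdump-qemu-100-2017_01_15-03_00_01.vma.lzo 999 --storage local
qm start 999
# ...check that the guest boots and the data inside is readable...
qm stop 999
qm destroy 999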
I'll leave you with my sad attempt at poetry:
Your backups must be tested
So you know they work as expected
Offline is best
So you can rest
When disaster strikes unexpected

[1] http://www.networkcomputing.com/cloud-infrastructure/code-spaces-lesson-cloud-backup/314805651
[2] http://www.technewsworld.com/story/80333.html
[3] http://royal.pingdom.com/2008/07/18/forget-about-hacking-your-servers-might-get-stolen/
[4] https://ciip.wordpress.com/2009/09/13/vodafone-turkey-woes/
 
Is the following setup possible with Ceph Jewel?

4 Nodes in Proxmox Cluster/Ceph Cluster:

2 Storage nodes, running some testing VMs as well:
--> 2 nodes (128 GB RAM, octa-core) with 13 OSDs of 6 TB each, MONs on the same disks

2 VM dedicated nodes
--> 2 nodes (265 GB RAM, octa-core) with no OSDs, but MONs running on local SSD (GPT) storage

This leaves us with 4 MONs, 2 of them on SSD, and 26 OSDs split across the 2 storage nodes.

- All nodes have additional 10 Gbit network cards dedicated to Ceph and cluster communication (via VLANs).
- Public communication runs via 1 Gbit network cards.

If I understand Ceph data redundancy (replicas) correctly, it should work to set the replica count to 2 instead of the default 3: if one of the storage nodes goes down, Ceph should still be running, even if the load is higher, right?

How many OSDs can be missing or damaged on each node before the cluster fails if the replica count is set to 2?
How many OSDs can be missing or damaged on each node before the cluster fails if the replica count is set to 3?

Thanks for your help!
 
Hi,
you should open a new thread about this question.
In short: it's not a good idea. For MONs you only need three, but you should also use a minimum of three nodes for OSDs.

A replica count of 2 is dangerous: if two OSDs on different nodes die, you will have data loss and, as a result, blocked I/O (see the sketch after this post).

Udo
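A minimal sketch of those safer defaults, with the pool name "rbd" used purely as a placeholder (size and min_size are set per pool):
Code:
# Keep the default of 3 replicas and require at least 2 copies to be
# available before the pool accepts I/O ("rbd" is an example pool name):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
ceph osd pool get rbd size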
 
Hello, I just updated my Ceph to Jewel following https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel,
but at the last step I got an ERROR:
Code:
ceph osd set require_jewel_osds
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_JEWEL feature

Ceph status is: HEALTH_OK
All features seem to work, I hope.
ceph --version: ceph version 10.2.5

That would indicate that you still have Hammer OSDs running.
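One quick way to check (a sketch, assuming admin access to the cluster from that node):
Code:
# Ask every running OSD which version it is executing; anything still
# reporting 0.94.x is a Hammer OSD that has not been restarted on Jewel:
ceph tell osd.* version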
 
How can I fix it?

Have you stopped the OSDs before upgrading and started them again afterwards (like the howto says)? If not, you might need to restart them (one by one, and check the status / logs in between!).
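Roughly like this on a Jewel (systemd) node, with the OSD ID 0 as a placeholder; wait for the cluster to report HEALTH_OK before moving on to the next one:
Code:
# Restart a single OSD, then check cluster health and its log before
# touching the next OSD ID:
systemctl restart ceph-osd@0
ceph -s
tail -n 50 /var/log/ceph/ceph-osd.0.log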
 
Fixed it, but I get a new issue.
When I restart a node, the HA VMs migrate but are unable to start:
Code:
Configuration file 'nodes/h1/qemu-server/101.conf' does not exist

I also saw that when I rebooted the node, the OSDs on that node were still shown as up.
 
Please open a new thread.
Your post is not relevant to this thread.
 
After upgrading one of my cluster nodes to Jewel and the latest/greatest apt-get update / dist-upgrade, I get the following when trying to create a new VM:

error writing header: (38) Function not implemented
TASK ERROR: create failed - rbd create vm-7890-disk-1' error: error writing header: (38) Function not implemented
 

Bump... Did anyone resolve this issue with Jewel 10.2.5?
 
