Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

Since Ceph Luminous was integrated into Proxmox 5.0, can we assume the jump from Proxmox 4.4 to Proxmox 5.1 will be as safe and simple as from 4.4 to 5.0? We have a 10-node cluster, 5 of the nodes running Ceph Jewel, and were simply waiting for Luminous 12.2, which the Ceph team labels the official production-ready release, but Proxmox has a note on the Ceph Jewel-to-Luminous upgrade page saying to wait until Proxmox 5.1.

That's the reason for the question. I have done upgrades before that turned out very problematic after waiting months to finally move to the next Proxmox version (3.x to 4.1), and I don't want to deal with big issues or caveats without knowing about them ahead of time.

Thanks
 
Is a CephFS storage plugin on the roadmap, for storing vzdump backups, ISOs, and container templates?

Using the CephFS FUSE client in fstab together with the Proxmox directory storage plugin sometimes fails when Proxmox creates /mnt/pve/mountdir before the FUSE client has mounted CephFS.
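For context, the fstab entry in question looks roughly like this (client ID, config path and mount point are placeholders for our setup, using the fuse.ceph mount helper from the ceph-fuse package):

Code:
# /etc/fstab - CephFS via FUSE; id and mount point are examples
id=admin,conf=/etc/ceph/ceph.conf  /mnt/pve/mountdir  fuse.ceph  defaults,_netdev  0  0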
 
you can add 'is_mountpoint yes' to the directory storage entry in /etc/pve/storage.cfg;
PVE will then check whether a filesystem is actually mounted there, and only then do its thing.
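For illustration, such an entry might look like this (storage name, path and content types here are examples, not taken from an actual config):

Code:
# /etc/pve/storage.cfg - directory storage on top of the CephFS mount
dir: cephfs-dir
        path /mnt/pve/mountdir
        content backup,iso,vztmpl
        is_mountpoint yes
        shared 1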
 
Since Ceph Luminous was integrated into Proxmox 5.0, can we assume the jump from Proxmox 4.4 to Proxmox 5.1 will be as safe and simple as from 4.4 to 5.0? [...]

Yes, upgrading from the latest 4.4 to 5.1 will be possible and, if anything, will work even better than upgrading to 5.0, because more bugs have been fixed ;).
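In rough outline, the in-place upgrade follows the documented 4.x-to-5.x procedure (repository file names depend on whether you use the enterprise or no-subscription repo - the official upgrade guide is authoritative, this is only a sketch):

Code:
# bring the node fully up to date on 4.4 first
apt update && apt dist-upgrade

# switch the Debian and Proxmox repositories from jessie to stretch
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/pve-enterprise.list

# upgrade to PVE 5.x and reboot into the new kernel
apt update && apt dist-upgrade
reboot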
 
you can add 'is_mountpoint yes' to the directory storage entry in /etc/pve/storage.cfg;
PVE will then check whether a filesystem is actually mounted there, and only then do its thing.

Oh, this is great. Hopefully this fixes the race condition between Proxmox and fstab. Thanks! :)
 
A first preview of the 4.13-based kernel is available in pvetest.

Either download it manually (pve-kernel-4.13.3-1-pve_4.13.3-2_amd64.deb - SHA256 2b41d8a23d61af1317f9e2e5a6d60538bb5a4aa92ae5f8642595447836a4b698) or, if you have a test system with the pvetest repository configured, install it with "apt install pve-kernel-4.13.3-1-pve". It will not be pulled in automatically via updates to the proxmox-ve package during this testing phase. There are also updated pve-firmware, pve-headers-4.13.3-1-pve and linux-tools-4.13 packages on pvetest.
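For the manual route, verifying the checksum before installing could look like this (the download location is omitted here, as above):

Code:
# check the downloaded package against the published SHA256
echo "2b41d8a23d61af1317f9e2e5a6d60538bb5a4aa92ae5f8642595447836a4b698  pve-kernel-4.13.3-1-pve_4.13.3-2_amd64.deb" | sha256sum -c -
dpkg -i pve-kernel-4.13.3-1-pve_4.13.3-2_amd64.deb

# or, with the pvetest repository configured:
apt update && apt install pve-kernel-4.13.3-1-pve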

Feedback (both positive and negative) is welcome ;)
 
Ceph 12.2.1 has been released. Can you update the packages in the test repository?

Updated (but please don't double-post - it does not make us work faster, it just slows us down if we have to wade through duplicates).
 
When I try to create a Ceph monitor I get the error: unable to open file '/etc/pve/ceph.conf.tmp.3455' - Permission denied
 

That means your PVE cluster is not quorate (check "pvecm status"). But please open a new thread for troubleshooting such issues!
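On a healthy cluster, the relevant part of the output looks roughly like this (node counts and IDs will obviously differ):

Code:
# pvecm status
Quorum information
------------------
Quorum provider:  corosync_votequorum
Nodes:            10
Quorate:          Yes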
 

@fabian I'm just in a conference with Mellanox; we found a bug in 12.2.1 (also present in previous versions). They will push the fix to master today.

The bug occurs because gid_idx is not initialized in a Port object. Offending messages:

Code:
2017-10-02 10:56:03.378774 7fb5f27fc700 20  RDMAConnectedSocketImpl activate transition to RTR state successfully.
/build/ceph-12.2.1/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: In function 'void RDMAConnectedSocketImpl::handle_connection()' thread 7fb5f2ffd700 time 2017-10-02 10:56:03.378791
/build/ceph-12.2.1/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 221: FAILED assert(!r)
2017-10-02 10:56:03.378774 7fb5f2ffd700 20  RDMAConnectedSocketImpl activate transition to RTR state successfully.
2017-10-02 10:56:03.378778 7fb5f2ffd700 -1  RDMAConnectedSocketImpl activate failed to transition to RTS state: (22) Invalid argument
2017-10-02 10:56:03.378777 7fb5f27fc700 -1  RDMAConnectedSocketImpl activate failed to transition to RTS state: (22) Invalid argument
 ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb5fe2c8a72]
 2: (RDMAConnectedSocketImpl::handle_connection()+0xb4a) [0x7fb5fe48491a]
 3: (EventCenter::process_events(int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0xa08) [0x7fb5fe46b0f8]
 4: (()+0x436d88) [0x7fb5fe46fd88]
 5: (()+0xb9e6f) [0x7fb5fc4b2e6f]
 6: (()+0x7494) [0x7fb608b22494]
 7: (clone()+0x3f) [0x7fb607f3faff]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aborted
 

If you have a PR/commit ID, maybe it is possible to cherry-pick the fix. 12.2.1 is already out the door, and I guess 12.2.2 will still take a while ;)
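Once the commit lands, picking it into a local build would look roughly like this (the commit ID is a placeholder until the fix is actually pushed):

Code:
# apply the RDMA fix on top of the released 12.2.1 source
git clone https://github.com/ceph/ceph.git && cd ceph
git checkout v12.2.1
git cherry-pick <commit-id-of-the-gid_idx-fix>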
 
When will you add ZFS 0.7.2?

The plan is to have a first batch of preview packages available next week, including a patch to ensure send/recv compatibility with ZFS 0.6.5.
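The compatibility concern with send/recv is the usual one: a stream produced by a newer zfs send must not rely on feature flags a 0.6.5 receiver cannot understand. A quick cross-version smoke test might look like this (pool, dataset and host names are placeholders):

Code:
# on the 0.7.x node: snapshot and stream to a node still on 0.6.5
zfs snapshot rpool/data/test@compat-check
zfs send rpool/data/test@compat-check | ssh old-node zfs recv rpool/data/test-recv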
 
@fabian any plans to include this patch in your 12.2.1 packages?
 
