Search results

  1. Proxmox VE Ceph Benchmark 2018/02

    Understood. I shall now replace all HDDs with SSDs, run the same test again to see whether anything significant changes, and publish the results here. Do you expect any performance improvement in CephFS in any future release? Has anything been put on the roadmap yet?
  2. Mounting CEPHFS on a client

    Agreed. As a matter of fact, we would only use a bind mount with containers, and the kernel module for all clients running outside Proxmox on bare metal (or as VMs, not containers). What concerns us is the significant difference in performance we are already discussing in the thread mentioned above...
  3. Proxmox VE Ceph Benchmark 2018/02

    That's exactly the point. We need shared filesystems. We are currently comparing CephFS performance against GPFS and Nutanix NDFS. That being said, I am afraid I did not understand your point. In both cases a. and b. above we are accessing CephFS. Only in case a. the container virtual disk is...
  4. Proxmox VE Ceph Benchmark 2018/02

    Still working on benchmarking CephFS, we noted the following difference in behavior: a. a container with a virtual disk stored on CephFS, with the benchmark running on its local /tmp, reaches a bandwidth of approx. 450 MB/s; b. the same container with a bind mount exported by the host and the benchmark running on this shared folder...
  5. Mounting CEPHFS on a client

    I'll let you know on Monday. I only have access to the cluster for a few hours a day. In the meantime, if you have a way to let me load the module into the host kernel, I would much appreciate it. Best regards, Rosario
  6. Mounting CEPHFS on a client

    Already tried. It fails while loading another module.
    [root@ct-test-02 ceph]# ceph-fuse -d -n client.1 /mnt/ceph/
    2019-08-16 13:00:38.835 7f2adc4c3e00 0 ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable), process ceph-fuse, pid 12815
    2019-08-16 13:00:38.859...
  7. Mounting CEPHFS on a client

    Can the module be loaded into the host's kernel prior to starting the LXC container?
  8. Mounting CEPHFS on a client

    The client OS mount fails with:
    [root@ct-test-02 /]# mount -t ceph 10.244.70.35:6789:/shared /mnt/ceph -o name=1,secretfile=/etc/ceph/ceph.client.1.secret
    failed to load ceph kernel module (1)
    mount error 1 = Operation not permitted
    Do you have any idea why? (A mount sketch follows these results.)
  9. Mounting CEPHFS on a client

    I have already answered your note above, and I never said it was in the Red Hat document (I wonder where you got that interpretation). I will answer again for clarity and for future readers: ceph-deploy is mentioned in the official Ceph documentation. Red Hat documentation does not need to...
  10. Mounting CEPHFS on a client

    Well, then you should probably alert potential users to the fact that, although you have made CephFS available, accessing it from an external client is not supported (or, as you say, you didn't want that to be its primary use), rather than stating everywhere that you now support CephFS (without...
  11. Mounting CEPHFS on a client

    Because what others document for their OSes starts from the assumption that not only the client but also the cluster is built on their OSes, while what I am trying to do is mount CephFS, built on your OS, from another client OS. For example, you don't use ceph-deploy, which instead is the...
  12. Mounting CEPHFS on a client

    Forget the other examples; those were just examples. Let me try to be clearer. Can you please point me to the part of your documentation that shows which packages need to be installed on a Ceph client in order to mount CephFS?
  13. Mounting CEPHFS on a client

    For the time being, a low-cost solution is: 1. add the EPEL repo to CentOS (yum install epel-release), then 2. yum install ceph (an install sketch follows these results).
  14. Mounting CEPHFS on a client

    Of course not. As I wrote, ceph-deploy is referenced only in the official Ceph documentation (see here). Besides, Red Hat has done its job and created a repository to install all the necessary packages for its CephFS clients (Ubuntu, CentOS, etc.). What I am asking you is: which...
  15. Mounting CEPHFS on a client

    I would like to mount CephFS on a client. Since the CephFS version is Nautilus, I decided to use, as the client, a container running CentOS 7. It might as well have been an external physical machine; it just happened that I wanted to try with a container. Yes, CephFS is already installed on Proxmox and working...
  16. Lost console access to container after cluster reboot running PVE 6.0

    Done. Same issue, no errors.
    Aug 16 09:29:44 pve-01 pvedaemon[5058]: <root@pam> starting task UPID:pve-01:00002178:00006330:5D565B68:vzstart:101:root@pam:
    Aug 16 09:30:32 pve-01 pvedaemon[5058]: <root@pam> end task UPID:pve-01:00002178:00006330:5D565B68:vzstart:101:root@pam: OK
    Aug 16...
  17. Lost console access to container after cluster reboot running PVE 6.0

    Hi Chris, 'pct enter CTID' worked just fine (a pct sketch follows these results). Still, the console through the web GUI doesn't. Is there any log I can extract to help you identify the issue? Best regards, Rosario
  18. Lost console access to container after cluster reboot running PVE 6.0

    What's that? Can you please elaborate? I have only created one container. I can create more and then test, if that might help with debugging.
  19. Lost console access to container after cluster reboot running PVE 6.0

    Yes, I also tried different browsers (Firefox, IE, Edge, Chrome), different platforms (macOS, Win 10 En., Fedora), and different Java versions. To be clear, it is only the console for the container; the console for the VM works fine.
  20. Lost console access to container after cluster reboot running PVE 6.0

    The container is still up and running and accessible via IP, but the console is blank; nothing shows up. The keyboard within the console works, and pressed keys appear, but that's it: no prompt, nothing. Any idea? Thank you, Rosario
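A note on results 6-8: an LXC container shares the host's kernel and cannot load kernel modules itself, which is why the mount reports "failed to load ceph kernel module". A minimal sketch of the workaround asked about in result 7, reusing the monitor address, client name, and paths from result 8 (an illustration under those assumptions, not verified against this cluster):

    # On the Proxmox host, before starting the container
    # (the container itself cannot run modprobe):
    modprobe ceph
    echo ceph >> /etc/modules   # optional: load the module on every boot

    # Then, inside the client, retry the mount from result 8:
    mount -t ceph 10.244.70.35:6789:/shared /mnt/ceph \
        -o name=1,secretfile=/etc/ceph/ceph.client.1.secret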
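A sketch of the low-cost client setup from result 13, assuming a stock CentOS 7 client and the client name, monitor address, and mount point from the earlier snippets:

    # Add the EPEL repo, then install the Ceph client packages:
    yum install -y epel-release
    yum install -y ceph

    # ceph-fuse (result 6) is a userspace alternative to the kernel mount;
    # note it needs /dev/fuse, which an unprivileged container may not expose:
    ceph-fuse -n client.1 -m 10.244.70.35:6789 /mnt/ceph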
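The CLI workaround from result 17, assuming CT ID 101 as shown in the task log of result 16:

    # On the Proxmox node, bypass the blank web GUI console:
    pct enter 101     # start a shell directly inside the container
    pct console 101   # attach to the container's console from the CLI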
