Search results

  1. Mount no longer works in Proxmox 6 - NFS - Synology

    Actually, here are a number of links proving it's a problem with the Ubuntu kernel after all. This problem is present in Ubuntu 19.10 or even earlier: https://forum.kodi.tv/showthread.php?tid=349370 https://github.com/sahlberg/libnfs/issues/294...
  2. Mount no longer works in Proxmox 6 - NFS - Synology

    Hi guys, I'm in the same boat but I believe my conditions are a bit different. Maybe this can help: I have the latest Proxmox v6.1-8 and a very old IBM v7000 Unified storage. The storage works well with several dozen machines so there's no problem with it. But it only supports NFSv2 and v3! I... [a version-pinning mount sketch follows the results list]
  3. New pool in ceph - without touching the old one

    Understood. Will I be able to change the default pool to work with the new rule, or will I have to create a new pool and transfer all the OSDs over? One more question: on a summary view of a Ceph pool I see that the TOTAL space sometimes grows a bit (from 3.44 TB to 3.55 TB, for example) - how...
  4. New pool in ceph - without touching the old one

    Hi again, so I tested this and this is what I got: I created a new device class "ssd2", set up a rule to assign ssd2 to OSDs, and disabled the automatic assignment of the device class. Then I created a new pool with this rule. It was assigned OK, and all the newly created OSDs were ssd2. But then it...
  5. New pool in ceph - without touching the old one

    Great, thanks! So let me just confirm one last time. I can actually create only one crush rule (for ssd2, for example) and attach the new OSDs to it, and I don't have to create any rules for the old existing pool, right? Sorry for asking the same thing over and over - I just don't want...
  6. New pool in ceph - without touching the old one

    In this order? I would think that first I should create the rule, then the pool and then the OSDs, otherwise the OSDs get attached to the old pool. I just want to create the new pool without touching the old one, I mean I don't want to create any crush rules for the old pool and definitely don't want...
  7. New pool in ceph - without touching the old one

    Excellent, thanks! What about the other questions? Especially the last one.
  8. New pool in ceph - without touching the old one

    Hi, I have a 4 node Proxmox cluster with Ceph. The version is 5.3. I have 1 pool with 16 OSDs - SSD. Now I added several nodes to the cluster and I want the new OSDs to be part of a different pool. I understand that this can only be done via device class settings: ceph osd crush rule... [a device-class and crush rule sketch follows the results list]
  9. Ceph raw usage grows by itself

    Hi, Thanks a lot for such a detailed response. I'd like to clarify some things though. I couldn't find the numbers you mentioned anywhere: the default size of RocksDB is 1 GB and the WAL is 500 MB. Can you please direct me to these? If I have a total of 12 OSDs in the cluster, will I be right to... [a sizing back-of-the-envelope follows the results list]
  10. Ceph raw usage grows by itself

    Hi, I have a new cluster of 4 nodes, 3 of them have ceph.
    root@pve3:~# pveversion -v
    proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
    pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
    pve-kernel-4.13.4-1-pve: 4.13.4-25
    libpve-http-server-perl: 2.0-6
    lvm2: 2.02.168-pve6
    corosync...
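
Results 1 and 2 both come down to NFS version negotiation: if the client tries a newer NFS protocol version first and the server only speaks v2/v3, the mount fails. A minimal sketch of pinning the version to 3, assuming a hypothetical server 192.168.1.10 and export /volume1/backup (placeholders, not taken from the threads):

    # quick manual test of a v3-only export
    mount -t nfs -o vers=3 192.168.1.10:/volume1/backup /mnt/test

The equivalent storage definition in /etc/pve/storage.cfg carries the same mount option:

    nfs: v3-storage
            export /volume1/backup
            path /mnt/pve/v3-storage
            server 192.168.1.10
            content backup
            options vers=3

The same option can also be passed when adding the storage from the shell, e.g. pvesm add nfs v3-storage --server 192.168.1.10 --export /volume1/backup --options vers=3.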
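
Results 3 through 8 all circle the same workflow: steering newly added OSDs into a separate pool by giving them their own device class and a crush rule that selects only that class. A minimal sketch of the pieces involved, using ssd2, ssd2-rule and pool-ssd2 as placeholder names (not from the thread):

    # keep OSDs from automatically (re)assigning a device class at startup,
    # as mentioned in result 4 (ceph.conf, [osd] section)
    osd_class_update_on_start = false

    # tag the new OSDs with the new class (remove any auto-assigned class first)
    ceph osd crush rm-device-class osd.16 osd.17
    ceph osd crush set-device-class ssd2 osd.16 osd.17

    # replicated rule that only places data on ssd2 devices
    ceph osd crush rule create-replicated ssd2-rule default host ssd2

    # new pool bound to that rule; the old pool and its rule are not touched
    ceph osd pool create pool-ssd2 128 128 replicated ssd2-rule

One caveat worth keeping in mind: if the old pool still uses a class-agnostic rule such as the default replicated_rule, that rule will also consider the new OSDs, so keeping the old pool off the new devices usually means giving it a class-restricted rule of its own as well.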
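
Result 9 quotes per-OSD defaults of roughly 1 GB for the RocksDB metadata store and 500 MB for the WAL (figures taken from the thread, not verified here). A back-of-the-envelope check of how much raw usage that alone would explain on the 12-OSD cluster in question:

    12 OSDs x (1 GiB DB + 0.5 GiB WAL) = 18 GiB of raw usage before any user data

Comparing that against ceph osd df, which lists actual per-OSD usage, is a straightforward way to see whether the growing raw total is metadata overhead or something else.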
