Search results

  1. [SOLVED] How to remove old mds from ceph? (actually slow mds message)

    <bump>. Someone must have run into this issue before? Maybe it's just an annoyance and doesn't affect the cluster, but then again, maybe it does. I'd really like to remove that.
  2. [SOLVED] How to remove old mds from ceph? (actually slow mds message)

    I had a failed node, which I replaced, but the MDS (for cephfs) that was on that node is still reported in the GUI as slow. How can I remove that? It's not in ceph.conf or storage.conf MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs mdssm1(mds.0): 6 slow metadata IOs are blocked > 30 secs...
  3. How to force quorum if the 3rd monitor is down

    I have a situation where a node failed (due to the boot drive failing) and then another node failed (due to RAM failure). There are 7 nodes in the cluster, so things kept running, but eventually there were many writes that could not be redundantly stored and the whole thing ground to a halt...
  4. Replace failed boot drive without trashing ceph OSD's?

    I have a failed boot drive in a 7 node proxmox cluster with ceph. If I replace the drive and do a fresh install, I would need to trash the OSD's attached to that node. If I could somehow recover the OSD's instead it would be great and probably save time too. Is that possible?
  5. Fileserver with backup

    PBS as a VM is definitely a good idea. However, PBS generates a fingerprint that you need to save, otherwise another instance won't be able to read the backups. You can attach storage to PBS in many ways. On the underlying OS (Debian) you can mount the storage and simply link to it in the...
  6. Ceph RBD features for Proxmox

    I have found that fast-diff is very useful, which requires exclusive-lock and object-map to be enabled as well. While the selection of features at RBD image create time is nicely documented, how to modify an existing volume is not easy to find. I wanted to enable fast-diff on images so I can...
  7. [SOLVED] How can i install the ceph dashboard in proxmox 6

    Ah, I found the solution. https://forum.proxmox.com/threads/ceph-dashboard-not-working-after-update-to-proxmox-7-from-6-4.104911/post-498277
  8. [SOLVED] How can i install the ceph dashboard in proxmox 6

    It's been a while, but it seems that all is not well with newer versions of ceph mgr and this... I get this: FT1-NodeA:~# apt-get install ceph-mgr-dashboard Reading package lists... Done Building dependency tree... Done Reading state information... Done ceph-mgr-dashboard is already the newest...
  9. Fileserver with backup

    You should really look into Proxmox Backup server. It will take the pain out of backups. I configured some older metal into a proxmox ceph cluster, run PBS in a VM and it's works really well. If you want to just protect yourself against user errors or similar, use snapshots. They're much...
  10. [BUG] Network is only working selectively, can't see why

    This morning a restart of a node that had not been restarted for quite some time caused the same symptoms as those reported here. It dawned on me that this swapping of ports might occur because a new kernel was now running. On further investigation, here's what I found. NodeA was running...
  11. ceph thin provisioning for lxc's not working as expected?

    :redface: Of course, the command has to run on the node on which the container is running...! ~# pct fstrim 192 /var/lib/lxc/192/rootfs/: 88.9 GiB (95446147072 bytes) trimmed /var/lib/lxc/192/rootfs/home/user-data/owncloud: 1.6 TiB (1795599138816 bytes) trimmed However, when I ask rbd for the...
  12. ceph thin provisioning for lxc's not working as expected?

    I don't think it's a good idea to run privileged containers for clients, no? If a UID matches one of the host's UIDs that has rights to locations a client should not have access to, it may create a big problem...
  13. ceph thin provisioning for lxc's not working as expected?

    Does it mean that if you have a mountpoint (over and above the boot drive), thin-provisioning doesn't work? ~# cat /etc/pve/lxc/192.conf arch: amd64 cores: 4 features: nesting=1 hostname: productive memory: 8192 nameserver: 8.8.8.8 net0...
  14. ceph thin provisioning for lxc's not working as expected?

    Of course that gives the same result. For some reason the container believes that the storage doesn't support trimming, i.e. it's not thin provisioned. However, some other volumes on the same ceph storage pool are completely ok with trimming. Could there be something that's set in the...
  15. ceph thin provisioning for lxc's not working as expected?

    The response is: fstrim: /: FITRIM ioctl failed: Operation not permitted This is Ubuntu 22.04 running a ceph storage cluster. Why is this?
  16. ceph thin provisioning for lxc's not working as expected?

    I have an LXC that is provisioned with a 100GB boot drive using ceph RBD storage. However, see the following: ~# df -h Filesystem Size Used Avail Use% Mounted on /dev/rbd10 98G 8.8G 85G 10% / This is in the running container. Checking the disk usage in ceph however, claims...
  17. [SOLVED] LXC with more cores assigned uses dramatically less CPU. Why?

    I know the matter has been resolved, but just for reference, here's what I was referring to: ~# uptime 14:17:19 up 21 days, 19:44, 1 user, load average: 5.37, 5.36, 5.55 This refers to CPU's, not percentages.
  18. How to fence off ceph monitor processes?

    In the continuous process of learning about running a pmx environment with ceph, I came across a note regarding ceph performance: "... if running in shared environments, fence off monitor processes." Can someone explain what is meant by this and how one achieves it? Thanks!
  19. [SOLVED] LXC with more cores assigned uses dramatically less CPU. Why?

    Then it must be changed to say percentage use. There is an enhancement request open for this.
  20. [SOLVED] LXC with more cores assigned uses dramatically less CPU. Why?

    40 CPU's, number 40 in the graph. Not percentages. Go to a VM however: 6 vCPU's assigned, the graphs shows 7. Why on earth would anyone think that this is a percentage graph?
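Results 1–2 concern a stale MDS from a replaced node that keeps reporting slow metadata IOs. A minimal sketch of the usual first steps, assuming the stale daemon's name is `NodeB` (hypothetical) — check the map first, then fail the stale daemon so a standby can take over:

```shell
# Show the MDS map and the health detail naming the slow daemon
ceph fs status
ceph health detail

# Mark the stale/ghost MDS daemon as failed (daemon name is hypothetical)
ceph mds fail NodeB
```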
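Result 3 asks how to recover quorum when monitors are down. While quorum still exists, a dead monitor can simply be removed; without quorum, the documented approach is to edit the monmap offline with `monmaptool` and inject it into a surviving monitor. A hedged sketch, with monitor IDs (`mon2`, `mon3`) hypothetical:

```shell
# With quorum: drop the dead monitor from the map
ceph mon remove mon3

# Without quorum: stop a surviving mon, extract and edit the monmap offline
systemctl stop ceph-mon@mon1
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool --rm mon2 --rm mon3 /tmp/monmap
ceph-mon -i mon1 --inject-monmap /tmp/monmap
systemctl start ceph-mon@mon1
```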
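Result 6 notes that enabling `fast-diff` on an existing RBD image requires `exclusive-lock` and `object-map` as well. A sketch of enabling them in dependency order (the pool and image names are hypothetical):

```shell
# exclusive-lock must be enabled before object-map/fast-diff
rbd feature enable rbd/vm-100-disk-0 exclusive-lock
rbd feature enable rbd/vm-100-disk-0 object-map fast-diff

# Rebuild the object map so fast-diff results are valid for the existing data
rbd object-map rebuild rbd/vm-100-disk-0
```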
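Results 11–16 revolve around trimming thin-provisioned LXC volumes on Ceph RBD: `fstrim` inside an unprivileged container fails with `FITRIM ioctl failed: Operation not permitted`, and the fix quoted in result 11 is to trim from the host that runs the container. A short sketch, with the container ID and image name hypothetical:

```shell
# On the node where container 192 runs: trim all its mountpoints from the host
pct fstrim 192

# Compare provisioned size against actual usage of the backing RBD image
rbd du rbd/vm-192-disk-0
```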
