Recent content by lifeboy

  1. How to force quorum if the 3rd monitor is down

    I have a situation where a node failed (due to the boot drive failing) and then another node failed (due to RAM failure). There are 7 nodes in the cluster, so things kept running, but eventually there were many writes that could not be redundantly stored and the whole thing ground to a halt...
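
    For reference, when a majority of Ceph monitors is gone, quorum can be forced by editing the monmap on a surviving monitor. A rough sketch of that generic recovery path (monitor names are placeholders, and the surviving monitor must be stopped first):

        # stop the surviving monitor, then extract its current monmap
        systemctl stop ceph-mon@nodeA
        ceph-mon -i nodeA --extract-monmap /tmp/monmap
        # remove the dead monitors from the map
        monmaptool /tmp/monmap --rm nodeB --rm nodeC
        # inject the edited map and start the monitor again
        ceph-mon -i nodeA --inject-monmap /tmp/monmap
        systemctl start ceph-mon@nodeA
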
  2. Replace failed boot drive without trashing ceph OSD's?

    I have a failed boot drive in a 7 node proxmox cluster with ceph. If I replace the drive and do a fresh install, I would need to trash the OSD's attached to that node. If I could somehow recover the OSD's instead it would be great and probably save time too. Is that possible?
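
    If the OSD data disks themselves are intact, they can usually be re-activated after a fresh install once the node has rejoined the cluster and has the ceph config and keyrings in place. A minimal sketch, assuming the OSDs were created with ceph-volume/LVM as Proxmox does by default:

        # list the existing LVM-based OSDs found on the local disks
        ceph-volume lvm list
        # activate all of them and start their OSD services
        ceph-volume lvm activate --all
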
  3. Fileserver with backup

    PBS as a VM is definitely a good idea. However, PBS generates a fingerprint that you need to save, otherwise another instance won't be able to read the backups. You can attach storage to PBS in many ways. On the underlying OS (Debian) you can mount the storage and simply link to it in the...
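
    To illustrate the storage side, a path mounted on the underlying Debian system can be registered as a PBS datastore. A minimal sketch (device, mount point, and datastore name are placeholders):

        # mount the backing storage on the PBS host (Debian)
        mount /dev/sdb1 /mnt/backup1
        # register that path as a PBS datastore
        proxmox-backup-manager datastore create store1 /mnt/backup1
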
  4. Ceph RBD features for Proxmox

    I have found that fast-diff is very useful, which requires exclusive-lock and object-map to be enabled as well. While the selection of features at RBD image create time is nicely documented, how to modify an existing volume is not easy to find. I wanted to enable fast-diff on images so I can...
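
    For reference, enabling these features on an existing image is done with rbd feature enable, followed by an object-map rebuild so fast-diff has valid data to work from. A short sketch (pool and image names are placeholders):

        # enable the prerequisite first, then object-map and fast-diff
        rbd feature enable mypool/vm-100-disk-0 exclusive-lock
        rbd feature enable mypool/vm-100-disk-0 object-map fast-diff
        # rebuild the object map so fast-diff results are correct
        rbd object-map rebuild mypool/vm-100-disk-0
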
  5. [SOLVED] How can i install the ceph dashboard in proxmox 6

    Ah, I found the solution. https://forum.proxmox.com/threads/ceph-dashboard-not-working-after-update-to-proxmox-7-from-6-4.104911/post-498277
  6. [SOLVED] How can i install the ceph dashboard in proxmox 6

    It's been a while, but it seems that all is not well with newer versions of ceph mgr and this... I get this:

        FT1-NodeA:~# apt-get install ceph-mgr-dashboard
        Reading package lists... Done
        Building dependency tree... Done
        Reading state information... Done
        ceph-mgr-dashboard is already the newest...
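
    If the package is installed but the dashboard still isn't reachable, the usual next steps are to enable the mgr module and set up a certificate and an admin user. A hedged sketch (user name and password file are placeholders):

        # enable the dashboard module in the active ceph-mgr
        ceph mgr module enable dashboard
        # generate a self-signed certificate for the dashboard's HTTPS endpoint
        ceph dashboard create-self-signed-cert
        # create an administrator account (password is read from a file)
        echo 'changeme' > /root/dash-pass
        ceph dashboard ac-user-create admin -i /root/dash-pass administrator
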
  7. Fileserver with backup

    You should really look into Proxmox Backup Server. It will take the pain out of backups. I configured some older metal into a proxmox ceph cluster, run PBS in a VM, and it works really well. If you want to just protect yourself against user errors or similar, use snapshots. They're much...
  8. [BUG] Network is only working selectively, can't see why

    This morning a restart of a node that had not been restarted for quite some time caused the same symptoms as those reported here. It dawned on me that this swapping of ports might be due to the newly running kernel. On further investigation, here's what I found. NodeA was running...
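
    One common cause of ports swapping after a kernel change is predictable-interface renaming; if that is what is happening, the names can be pinned to the MAC addresses with a systemd .link file. This is a generic mitigation, not necessarily the fix found in that thread, and the MAC address and interface name below are placeholders:

        # pin the NIC name to its MAC address with a systemd .link file
        cat > /etc/systemd/network/10-persistent-net.link <<'EOF'
        [Match]
        MACAddress=aa:bb:cc:dd:ee:ff

        [Link]
        Name=eth0
        EOF
        # include the rule in the initramfs and reboot for it to take effect
        update-initramfs -u
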
  9. ceph thin provisioning for lxc's not working as expected?

    :redface: Of course, the command has to run on the node on which the container is running...!

        ~# pct fstrim 192
        /var/lib/lxc/192/rootfs/: 88.9 GiB (95446147072 bytes) trimmed
        /var/lib/lxc/192/rootfs/home/user-data/owncloud: 1.6 TiB (1795599138816 bytes) trimmed

    However, when I ask rbd for the...
  10. ceph thin provisioning for lxc's not working as expected?

    I don't think it's a good idea to run privileged containers for clients, is it? If a UID matches one of the host's UIDs that has rights to locations a client should not have access to, it may create a big problem...
  11. ceph thin provisioning for lxc's not working as expected?

    Does it mean that if you have a mountpoint (over and above the boot drive), thin-provisioning doesn't work?

        ~# cat /etc/pve/lxc/192.conf
        arch: amd64
        cores: 4
        features: nesting=1
        hostname: productive
        memory: 8192
        nameserver: 8.8.8.8
        net0...
  12. ceph thin provisioning for lxc's not working as expected?

    Of course that gives the same result. For some reason the container believes that the storage doesn't support trimming, i.e. it's not thin provisioned. However, some other volumes on the same ceph storage pool are completely ok with trimming. Could there be something that's set in the...
  13. ceph thin provisioning for lxc's not working as expected?

    The response is:

        fstrim: /: FITRIM ioctl failed: Operation not permitted

    This is Ubuntu 22.04 running on a ceph storage cluster. Why is this?
  14. ceph thin provisioning for lxc's not working as expected?

    I have an LXC that is provisioned with a 100GB boot drive using ceph RBD storage. However, see the following:

        ~# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/rbd10       98G  8.8G   85G  10% /

    This is in the running container. Checking the disk usage in ceph however, claims...
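
    For context, this kind of mismatch is usually checked by comparing df inside the container with what rbd reports for the backing image. A minimal sketch (pool and image names are placeholders):

        # inside the container: the filesystem's own view of usage
        df -h /
        # on a ceph node: space provisioned vs. actually used by the RBD image
        rbd du mypool/vm-192-disk-0
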
  15. [SOLVED] LXC with more cores assigned uses dramatically less CPU. Why?

    I know the matter has been resolved, but just for reference, here's what I was referring to:

        ~# uptime
        14:17:19 up 21 days, 19:44, 1 user, load average: 5.37, 5.36, 5.55

    This refers to CPUs, not percentages.
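
    In other words, a load average of about 5.4 on, say, an 8-core machine is roughly two thirds of capacity, not 540%. A small illustrative snippet for expressing it per core:

        # 1-minute load average divided by the number of CPUs
        awk -v cores="$(nproc)" '{printf "%.0f%% of capacity\n", $1 / cores * 100}' /proc/loadavg
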
