Recent content by Fug1

  1. ceph-osd OOM

    I ran the trim on all PGs in all OSDs where the duplicate entries exceeded 6,000. That seems to have done the trick: my OSDs can now start and my cluster is healthy.
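
    A rough sketch of the trim step (run with the OSD service stopped; {OSD-ID} is a placeholder, and these are not the exact commands from the linked thread, so double-check against it before running anything):

    ```
    systemctl stop ceph-osd@{OSD-ID}

    # List the PGs on this OSD and trim the duplicate pg_log entries for each
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-{OSD-ID} --op list-pgs |
    while read -r pg; do
        echo "trimming dups for ${pg}"
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-{OSD-ID} \
            --op trim-pg-log-dups --pgid "${pg}"
    done

    systemctl start ceph-osd@{OSD-ID}
    ```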
  2. ceph-osd OOM

    Apparently Ceph Quincy has a log entry that suggests running this command if the number of duplicates exceeds 6,000. https://forum.proxmox.com/threads/pve7-2-errors-in-osd-logs-after-upgrade-from-ceph-pacific-to-quincy.117009/
  3. ceph-osd OOM

    Yes, I really need to upgrade but can't do that while the cluster is unhealthy. I tried to go through the process documented on that webpage, but it's lacking some detail. The command: while read pg; do echo $pg; ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-{OSD-ID} --op log...
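
    For reference, the dup-counting loop I'm referring to looks roughly like this (a sketch based on my understanding, not the exact command from that page; the OSD must be stopped, jq must be installed, and {OSD-ID} is a placeholder):

    ```
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-{OSD-ID} --op list-pgs |
    while read -r pg; do
        echo "${pg}"
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-{OSD-ID} \
            --op log --pgid "${pg}" | jq '.pg_log_t.dups | length'
    done
    ```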
  4. ceph-osd OOM

    A couple of additional data points: two of the nodes have 32 GB of memory, the other has 64 GB. All three nodes are experiencing the ceph-osd OOM issue. osd_memory_target for the OSDs appears to be 4 GB: `ceph config get osd osd_memory_target` returns 4294967296.
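
    For what it's worth, the target can be checked and, if needed, lowered with ceph config; the 2 GiB figure below is just an illustration, not a recommendation:

    ```
    # Current value in bytes (4294967296 = 4 GiB)
    ceph config get osd osd_memory_target

    # Illustrative only: lower the target for all OSDs to 2 GiB
    ceph config set osd osd_memory_target 2147483648

    # Or for a single OSD, e.g. osd.0
    ceph config set osd.0 osd_memory_target 2147483648
    ```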
  5. ceph-osd OOM

    I have a 3-node PVE 7.4-18 cluster running Ceph 15.2.17. There is one OSD per node, so pretty simple. I'm using 3 replicas, so the data should basically be mirrored across all OSDs in the cluster. Everything has been running fine for months, but I've suddenly lost the ability to get my OSDs up...
  6. [SOLVED] PBS with datastore on NFS

    Yes, it's working perfectly for my application. I haven't observed any performance issues.
  7. [SOLVED] PBS with datastore on NFS

    This is just a small home system. NFS performance is sufficient for my use case.
  8. Exclude/Include directories for proxmox-backup-client on PVE node

    I am using proxmox-backup-client to back up my PVE nodes to a PBS repository. I'd like to get to the point where I can recover a PVE node by installing the PVE software and then running a restore from the backup, which will hopefully recover any PVE configurations for my system. Is there a...
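
    For context, what I have in mind is something like the sketch below (repository name and paths are made-up examples, and my understanding of .pxarexclude comes from the docs, so treat it as unverified):

    ```
    # Example repository; adjust user/host/datastore for your setup
    export PBS_REPOSITORY='root@pam@pbs.example.lan:datastore1'

    # A .pxarexclude file (one glob pattern per line) placed in a directory
    # is supposed to skip matching entries beneath it
    printf '%s\n' '/mnt/*' '/var/lib/vz/dump/*' > /.pxarexclude

    # Back up the host root filesystem; /etc/pve is a separate (fuse)
    # filesystem, so it needs --include-dev to be picked up
    proxmox-backup-client backup root.pxar:/ --include-dev /etc/pve
    ```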
  9. [SOLVED] PBS with datastore on NFS

    I solved this by setting no_root_squash on the NFS share. Apparently in FreeNAS/TrueNAS you do this by setting the maproot user to 'root' and the maproot group to 'wheel' on the NFS share. https://www.truenas.com/community/threads/equivalent-for-root_squash-no_root_squash-on-freenas.80024/
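
    For anyone using a plain Linux NFS server instead of FreeNAS/TrueNAS, the equivalent would be a no_root_squash export along these lines (path and subnet are examples):

    ```
    # /etc/exports on the NFS server (example path and subnet)
    /mnt/tank/pbs-backups 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    ```

    followed by `exportfs -ra` to apply it.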
  10. [SOLVED] PBS with datastore on NFS

    I know there have been a lot of posts about this, but I'm still struggling. I have a new PBS instance up and running 3.2-2. I have a FreeNAS/TrueNAS server with an NFS share set up for my backups. I have mounted the NFS share using /etc/fstab under /mnt/backups in PBS. I have verified that...
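
    Concretely, the mount looks roughly like this, with example names in place of my real ones:

    ```
    # /etc/fstab entry on the PBS host (example server and export path)
    truenas.example.lan:/mnt/tank/pbs  /mnt/backups  nfs  defaults,_netdev  0  0
    ```

    and the datastore would then be created on top of the mount point, e.g. `proxmox-backup-manager datastore create nfs-store /mnt/backups`.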
  11. Shared application storage via NFS

    I have a Proxmox 6.x home environment, and I'm using Ceph on my Proxmox cluster to store my VMs and LXC containers. On a separate server I have FreeNAS for my bulk storage, which I share out using NFS. Right now, I have the NFS mounts configured in Proxmox under /etc/fstab, with mount points...
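
    To make that concrete, the current setup is roughly the following (example hostnames, IDs and paths, not my exact config):

    ```
    # /etc/fstab on the PVE host (example export)
    freenas.example.lan:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0
    ```

    with the mount then bind-mounted into a container, e.g. `pct set 101 -mp0 /mnt/media,mp=/mnt/media`.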
  12. Can't use LXC snapshots while a Bind Mount is setup

    It would be great if we could get this functionality. I have all my containers on RBD now, but I can't snapshot them while they have bind mounts. For me, the data within the bind mount is on ZFS with snapshots already...so it doesn't need to be included in the container...
  13. No network after upgrade to Proxmox 6

    The journalctl output further up in the thread should still be valid, and I'm also attaching a new one here. I can't see anything obvious. It might be udev-related, but I don't know enough about that to troubleshoot further. Yes, there is a DHCP server running on the network, but none of the IP addresses...
  14. No network after upgrade to Proxmox 6

    I still seem to be having this problem after a reboot, but it's a bit sporadic. When the network fails to come up on a node, I have to manually start it with `ifup vmbr0`.
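
    For reference, the bridge definition in /etc/network/interfaces looks more or less like this (addresses and interface names are examples, not my actual config):

    ```
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
    ```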
  15. No network after upgrade to Proxmox 6

    No, I was playing around with the IPMI and network boot settings. I've changed every combination back to see if I could reproduce it, but unfortunately I can't.