Search results

  1. CEPH 17.2.7 - "ceph device ls" is wrong

    Thank you, much appreciated. I couldn't find anything obvious wrong - I suspect something got toggled over time when upgrading from 15 to 16 to 17, and just never got turned back on.
  2. CEPH 17.2.7 - "ceph device ls" is wrong

    ok then..... Not a lot to see there - each device is correctly in the list (at the top) and each node has the right drives.
  3. CEPH 17.2.7 - "ceph device ls" is wrong

    yes to both "ceph osd free" and "pve webui". The crushmap is also correct - pmx1 for example:
    host pmx1 {
        id -3             # do not change unnecessarily
        id -13 class ssd  # do not change unnecessarily
        id -2 class hdd   # do not change unnecessarily
        # weight 37.84184
        alg...
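
    For anyone wanting to re-check their own crushmap the way this result does, a sketch of dumping and decompiling it (the /tmp paths are just placeholders):

      # dump the compiled crushmap from the cluster and decompile it to text
      ceph osd getcrushmap -o /tmp/crushmap.bin
      crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
      # the host buckets (like "host pmx1" above) show up in the text dump
      less /tmp/crushmap.txt
      # quick sanity check of the OSD/host layout
      ceph osd tree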
  4. CEPH 17.2.7 - "ceph device ls" is wrong

    Just ran into this in the lab, haven't gone digging in prod yet. pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.2.16-20-pve) Cluster is alive, working, zero issues, everything in GUI is happy, 100% alive -- however... the "ceph device" table appears to have NOT updated itself for a...
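
    A few commands that might help when the "ceph device" table looks stale - these are the stock device-health commands, not something quoted from this thread:

      # what the cluster currently thinks the devices are
      ceph device ls
      # make sure device health monitoring is actually enabled
      ceph device monitoring on
      # request a fresh SMART scrape instead of waiting for the scheduled one
      ceph device scrape-health-metrics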
  5. [SOLVED] Very poor LXC network performance

    So pulling this thread in an attempt to fix SMB performance issues under 8.0.x - my samba container has the following values:
    root@pmx3:~# cat /sys/fs/cgroup/lxc/5209/memory.high
    8522825728
    root@pmx3:~# cat /sys/fs/cgroup/lxc/5209/ns/memory.high
    max
    How would you change that? It's a...
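
    To answer the "how would you change that" question in a quick-and-dirty way, the cgroup file can be written directly on the PVE host (5209 is the container ID from the output above; the change does not survive a container restart):

      # lift the memory.high throttle for CT 5209 until its next restart
      echo max > /sys/fs/cgroup/lxc/5209/memory.high
      cat /sys/fs/cgroup/lxc/5209/memory.high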
  6. Samba under PVE 8 sometimes extremely slow

    I don't know if this: https://forum.proxmox.com/threads/very-poor-lxc-network-performance.139701/ applies - but if I could figure out what they were suggesting, I would give it a try.
  7. Samba under PVE 8 sometimes extremely slow

    ARRRRRGGGGHHH!! I'm not the only one!! I'm running two different LXC containers (bind mounts) running Samba -- I've been trying to figure out why the performance has not only gone in the toilet since the 8.x upgrade, but seems to randomly STOP for 30 seconds, then restart.. repeatedly. I've...
  8. Another docker experience on Proxmox

    One other note... any new overlay networks you create have to get the same sysctl -w net.ipv4.ip_forward=1 treatment, or they don't work. I've moved the fix.ingress.start script into the "/etc/periodic/15min" cron folder (on alpine). That way I create one - just wait a few minutes, and it works...
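
    A sketch of what a catch-all script in /etc/periodic/15min could look like - it blindly re-applies the sysctl to every docker network namespace it finds, which is an assumption on my part rather than the exact script from the post:

      #!/bin/sh
      # re-enable forwarding inside every docker-created netns (ingress, overlays, ...)
      for ns in /run/docker/netns/*; do
          [ -e "$ns" ] || continue
          nsenter --net="$ns" sysctl -w net.ipv4.ip_forward=1
      done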
  9. Another docker experience on Proxmox

    ok, so with that extra work, I appear to have the following working: docker-swarm inside Alpine 3.18 within PRIV LXC containers on Proxmox 8.1.x, with CEPH as the backing file system. This allows me to bind-mount a /CEPH folder in each docker-host, and then use bind mounts in the YAML files to...
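
    The bind-mount chain described here could look roughly like this (CT ID 201 and the cephfs path are placeholders, not values from the post):

      # on the PVE host: expose the shared CephFS mount as /CEPH inside the container
      pct set 201 -mp0 /mnt/pve/cephfs,mp=/CEPH
      # inside the container, stack/compose files can then bind-mount from it, e.g.
      #   volumes:
      #     - /CEPH/app-data:/data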
  10. Another docker experience on Proxmox

    And then VFS bit me, and I had 16G in /var/lib/docker/vfs/
    * https://c-goes.github.io/posts/proxmox-lxc-docker-fuse-overlayfs/
    * https://github.com/containers/fuse-overlayfs/releases
    ON PROXMOX HOST
    pmx1:~# apt -y install fuse-overlayfs
    and then add the "FUSE" capability to the container...
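
    Pieced together from the links above, the host-plus-container side of the fuse-overlayfs fix might look like this (CT ID 201 is a placeholder; double-check the details against the linked guide):

      # ON PROXMOX HOST (as in the post)
      apt -y install fuse-overlayfs
      # give the container the FUSE feature
      # (note: -features replaces the whole list, so include any flags you already use, e.g. nesting=1)
      pct set 201 -features fuse=1,nesting=1
      pct reboot 201
      # inside the container, docker is then pointed at the fuse-overlayfs
      # storage driver via /etc/docker/daemon.json, per the first link above:
      #   { "storage-driver": "fuse-overlayfs" }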
  11. Another docker experience on Proxmox

    I did find a more elegant way to set the forwarding, it's absolutely still needed to function -- using this as "/etc/local.d/fix.ingress.start"
    #!/bin/bash
    for lp in {1..60}; do
        if exists=$(test -f /run/docker/netns/ingress_sbox)
        then nsenter...
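
    Filling in the truncated script above from the pieces visible in these results - treat it as a best guess at the intent, not a verbatim copy:

      #!/bin/bash
      # /etc/local.d/fix.ingress.start
      # wait up to ~60s for docker swarm to create the ingress namespace,
      # then enable forwarding inside it
      for lp in {1..60}; do
          if [ -f /run/docker/netns/ingress_sbox ]; then
              nsenter --net=/run/docker/netns/ingress_sbox \
                  sysctl -w net.ipv4.ip_forward=1
              break
          fi
          sleep 1
      done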
  12. Another docker experience on Proxmox

    yes, repeatedly. :( Your guide was the closest I got to success, thank you. (in hindsight, I missed that you had to use PRIV containers as well). In addition I set up alpine, ubuntu, debian, etc, all from scratch, tried the same steps, no love. HOWEVER, I just repeated the exercise, but made...
  13. Another docker experience on Proxmox

    Hey so I've tried everything on every post I can find, and I'm unable to get past the point of getting overlay/ingress networking functional. references:
    * https://gist.github.com/Drallas/e03eb5a4f68bb526f920a423455bc0c9
    *...
  14. Intel gvt-g not working with 6.2

    https://www.reddit.com/r/homelab/comments/jyudnn/enable_mediated_intel_igpu_gvtg_for_vms_in/ that was the most simple set of directions for me.
  15. Intel gvt-g not working with 6.2

    Just wanted to add my 2 cents... I spent the last week fighting to get it enabled, kept getting stuck with ls "/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types" coming back blank. Turned out to be the same thing that @everwisher ran into, removing the kernel parameter: nomodeset (or...
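
    The check that kept "coming back blank" in this post, plus the kernel-parameter side of it, roughly sketched out (i915.enable_gvt=1 is the usual GVT-g prerequisite, not something quoted from this thread):

      # /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT should include
      #   intel_iommu=on i915.enable_gvt=1   (and, per the post, NOT nomodeset)
      update-grub
      reboot
      # after the reboot this should list mdev profiles instead of being empty
      ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types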
  16. [SOLVED] Kernel 5.11 & NVIDIA Linux vgpu-kvm

    ^^^ sadly dead -- anyone have a new archive site?
  17. PVE8 - Change to IOMMU/Passthrough?

    <shaking head> While I appreciate the need for technical accuracy, it doesn't advance the problem solving efforts, and borders on snarky. However, because it obviously makes people feel better, I will edit the post to use only the correct names for the products. <cheering> Played with the...
  18. PVE8 - Change to IOMMU/Passthrough?

    Recently upgraded from PMX7 (Proxmox Version 7 or PVE7) to PMX8 (Proxmox Version 8 or PVE8), and I noticed the passthrough doesn't seem to work the same way in the UI as it used to. On <= PVE7, the way you knew IOMMU/etc wasn't working is that in the UI, the "MAPPED DEVICES" dropdown would be empty...
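
    Two quick host-side checks that are independent of the UI dropdown being empty - nothing here is specific to the PVE8 mapping feature, it is just generic IOMMU verification:

      # did the kernel actually bring IOMMU up?
      dmesg | grep -e DMAR -e IOMMU
      # are any IOMMU groups populated?
      find /sys/kernel/iommu_groups/ -type l | head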
  19. Recommendation Request: Dual Node Shared Storage

    Thanks, appreciate the clarification - that's kinda where I was thinking as well. I would be taking on most of the support; however, any parts that require commercial licensing (including pmx and/or drbd) would be paid for by the customer. I do wish there was a good dual-node model, I think it...
  20. Recommendation Request: Dual Node Shared Storage

    Had a client request a fully redundant dual-node setup, and most of my experience has been either with single node (ZFS FTW) or lots of nodes (CEPH FTW). Neither of those approaches seems to work well in a dual-node, fully redundant setup. Here's my thinking, wanted to see what the wisdom of the...