Search results

  1.

    Docker to PVE-LXC conversion steps/tool?

    Amazing, thank you, I'll give it a shot.
  2.

    Docker to PVE-LXC conversion steps/tool?

    Beautiful representation of his contributions. Would make a good framed picture for his wall. Not sure, I didn't use his script for Frigate. I just stood up a new LXC container, jumped through the hoops to pass iGPU / Coral through, then stood up docker inside and ran Frigate. Important...
  3.

    Docker to PVE-LXC conversion steps/tool?

    As a side note, because it needs attention: tteck, who created the fantastic helper scripts I've used repeatedly and reference above, passed away in Nov 2024. https://github.com/community-scripts/ProxmoxVE/discussions/237 If you liked his scripts and used them, consider buying a coffee, dropping a...
  4.

    Docker to PVE-LXC conversion steps/tool?

    10 months after my initial post, I have everything moved to LXC containers, and I'm 99% happy.. EXCEPT... Frigate is still running in docker in an LXC container, and while it's "thinner" than a KVM instance + Docker, I'd still love to be able to drop it back to native LXC, lose docker, at some...
  5.

    [SOLVED] Intel NIC e1000e hardware unit hang

    Same hardware as @the_Uli (NUC10i7FN) Linux pve16 6.5.13-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-6 (2024-07-26T12:34Z) x86_64 GNU/Linux Still happening, still only seeing it happen on hosts with active KVM instances. root@pve16:~# journalctl -n 100 -g "Detected Hardware Unit Hang" Oct 03...
  6.

    [SOLVED] Intel NIC e1000e hardware unit hang

    Starting to see this on a couple of NUCs after upgrading from the 6.2 kernel to the 6.5 kernel. Already setting TX/RX to 4096. Sep 18 12:22:54 pve15 kernel: e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang: TDH <d7f>...
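    A hedged sketch of the tuning above, assuming the NIC is eno1 as in the quoted log. The ring-buffer change is the "TX/RX to 4096" from the post; the offload changes are a commonly suggested extra workaround for e1000e hang messages, not something stated in the post itself:

        # Ring buffers to 4096 (the "TX/RX to 4096" mentioned above):
        ethtool -G eno1 rx 4096 tx 4096

        # Commonly suggested additional mitigation: disable offloads on the
        # port (costs some CPU; verify behavior on your own hardware):
        ethtool -K eno1 tso off gso off gro off

        # To persist across reboots, hook both into the iface stanza in
        # /etc/network/interfaces:
        #   post-up /usr/sbin/ethtool -G eno1 rx 4096 tx 4096
        #   post-up /usr/sbin/ethtool -K eno1 tso off gso off gro off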
  7.

    Modifying Shutdown Sequence (Stop LXC's prior to Networking)

    I've had better luck using: "pct shutdown %x --forcestop 1 --timeout 60", then doing whatever reboot/shutdown I need to do.
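    A minimal sketch of applying the quoted pct command to every container before a host reboot; the loop over pct list output is illustrative and not part of the original post:

        #!/bin/bash
        # Force-stop all containers before rebooting the host, using the
        # pct invocation from the post above (loop added for illustration).
        for vmid in $(pct list | awk 'NR>1 {print $1}'); do
            echo "Stopping CT ${vmid} ..."
            pct shutdown "${vmid}" --forcestop 1 --timeout 60
        done
        # ...then run whatever reboot/shutdown is needed, e.g. systemctl reboot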
  8.

    Best practise sharing NFS to proxmox cluster?

    This approach has multiple disadvantages, not the least of which is reduced performance, since the bind mounts all share the host NFS mount (in addition to LACP bundles only utilizing a single member). Anyone figure out how to fix the IO deadlock issues? EDIT: One potential note -- I was...
  9.

    Modifying Shutdown Sequence (Stop LXC's prior to Networking)

    Fair question; I'm having the same issue myself. I'm using containers with NFS mounts inside them, and they refuse to stop or shut down; I have to kill the "lxc-start" process to get them to die. Any suggestions welcome.
  10.

    Can I move a CEPH disk between nodes?

    Didn't work for me, but it's been a while, and the devs are good about sneaking in silent improvements. :)
  11.

    NFS server over ipv6

    Has this been updated? NFS mounts allowed via IPv6 yet?
  12.

    Console video resolution - what's the "Right" way?

    Yes, for me the answer turned out to be forcing the video mode, which I commented on in this thread: https://forum.proxmox.com/threads/changing-host-console-resolution.12408/ Short version - add the following to the END of your GRUB_CMDLINE_LINUX_DEFAULT, then "update-grub", then reboot...
  13.

    Changing host "console" resolution?

    However, THIS works and doesn't disable GVT/etc. Append the following to the end of your GRUB_CMDLINE_LINUX_DEFAULT: video=1024x768@60 (change appropriately for your output device). Also check out "dpkg-reconfigure console-setup" to change console font sizes.
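    Putting the two console-resolution posts above together, a minimal sketch of the change in /etc/default/grub (the resolution is only an example; adjust for your display):

        # /etc/default/grub -- append the video mode to the existing defaults:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1024x768@60"

        # Then regenerate the GRUB config and reboot:
        #   update-grub
        #   reboot
        # Optionally adjust the console font afterwards:
        #   dpkg-reconfigure console-setup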
  14.

    PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    Miserable, sorry you're having so much trouble! I've been following the threads over on Facebook as well, and while I can't nail down a specific spot (not knowing your network), in the past when I've had these types of issues it's been one of three things: 1. MTU issue (as the others in the FB...
  15.

    PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    Ok, here's an OVS config from one of my lab nodes:
    VLAN5/500/501: management
    VLAN800/850: CEPH CLIENT
    VLAN900/950: CEPH BACKEND

    auto lo
    iface lo inet loopback

    ##enp7s0 - management interfaces
    auto enp7s0f0
    iface enp7s0f0 inet manual
        mtu 9000
        ovs_mtu 8896

    auto enp7s0f1
    iface...
  16.

    PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    So, long shot, but have you considered moving from a Linux bond to OVS (Open vSwitch) to see if the behavior changes? Something in the back of my brain is tickling re: 5.x to 6.x kernel changes... I'm traveling so I can't copy/paste a config, but IMHO it's worth trying. Overall OVS has been much more...
  17.

    Bcache in NVMe for 4K and fsync. Are IOPS limited?

    Agreed. And much safer than moving the DB/WAL over to another device. Using bcache, if the cache device fails, it just falls through to the disks. In CEPH if the DB/WAL device fails - you lose all the devices it was serving. :(
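    For context on the bcache setup discussed above, a rough sketch of creating a bcache device; the device names and the cset UUID placeholder are illustrative, not taken from the post:

        apt install bcache-tools

        make-bcache -B /dev/sda            # slow backing disk
        make-bcache -C /dev/nvme0n1p1      # NVMe cache partition

        # Find the cache-set UUID and attach it (replace <cset-uuid> with the
        # value printed by bcache-super-show):
        bcache-super-show /dev/nvme0n1p1 | grep cset.uuid
        echo <cset-uuid> > /sys/block/bcache0/bcache/attach

        # Writeback mode helps most with small/fsync-heavy IO, at the cost of
        # depending on the cache device until dirty data has been flushed:
        echo writeback > /sys/block/bcache0/bcache/cache_mode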
  18.

    Docker to PVE-LXC conversion steps/tool?

    I appreciate your viewpoint, and when it's other people's network, money, and datacenter space, I agree, it's not as big of a deal. When you have cores to burn, there's not as much difference between the two. In my case, I'm moving my home lab from 5 machines, 160 cores, 1280G of ram, and 40...
  19.

    Docker to PVE-LXC conversion steps/tool?

    Been down that road. Got docker-swarm working in privileged LXC containers. Preferred it to KVM, but you can't snapshot them to back them up (long story), so I'm moving away from that as well. 99.9% of what is running in my lab & home clusters is LXC containers. (even moved plex/jellyfin into...
