Recent content by dlasher

  1. Can I move a CEPH disk between nodes?

    Didn't work for me, but it's been a while, and the devs are good about sneaking in silent improvements. :)
  2. NFS server over ipv6

    Has this been updated? NFS mounts allowed via IPv6 yet?
  3. Console video resolution - what's the "Right" way?

    Yes, for me the answer turned out to be forcing the video mode, which I commented on in this thread: https://forum.proxmox.com/threads/changing-host-console-resolution.12408/ Short version - add the following to the END of your GRUB_CMDLINE_LINUX_DEFAULT, then run "update-grub", then reboot...
  4. Changing host "console" resolution?

    However, THIS works, and doesn't disable GVT etc. - append the following to the end of your GRUB_CMDLINE_LINUX_DEFAULT: video=1024x768@60 (change appropriately for your output device). Also check out "dpkg-reconfigure console-setup" to change console font sizes.
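    A minimal sketch of what that edit looks like in /etc/default/grub (the existing "quiet" option is an assumption about an otherwise-default install, not something from the post):

        # /etc/default/grub - append video= to the existing default options
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1024x768@60"

        # then regenerate the GRUB config and reboot
        update-grub
        reboot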
  5. PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    Miserable, sorry you're having so much trouble! I've been following the threads over on Facebook as well, and while I can't nail down a specific spot (not knowing your network), in the past when I've had these types of issues it's been one of three things: 1. MTU issue (as the others in the FB...
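    (Not from the original post, but a common way to check the first of those: send a non-fragmentable ping sized for jumbo frames; 8972 bytes of payload plus 28 bytes of IP/ICMP headers makes a 9000-byte packet. The target address below is a placeholder.)

        # fails or reports "message too long" if any hop's MTU is below 9000
        ping -M do -s 8972 -c 4 192.0.2.10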
  6. PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    Ok, here's an OVS config from one of my lab nodes: VLAN5/500/501: management, VLAN800/850: CEPH CLIENT, VLAN900/950: CEPH BACKEND

        auto lo
        iface lo inet loopback

        ##enp7s0 - management interfaces
        auto enp7s0f0
        iface enp7s0f0 inet manual
            mtu 9000
            ovs_mtu 8896

        auto enp7s0f1
        iface...
  7. PMX 8.1.4 - Post upgrade - Network throughput problem - Iperf3 0.00 bits/s

    So, long shot, but have you considered moving from the Linux bond to OVS (Open vSwitch) to see if the behavior changes? Something in the back of my brain is tickling RE: 5.x to 6.x kernel changes... I'm traveling so I can't copy/paste a config, but IMHO it's worth trying. Overall OVS has been much more...
  8. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    Agreed. And much safer than moving the DB/WAL over to another device. Using bcache, if the cache device fails, it just falls through to the disks. In CEPH, if the DB/WAL device fails, you lose all the devices it was serving. :(
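    (A sketch of the layering being described, not commands from the post - device names are placeholders: the spinning disk becomes the bcache backing device, the NVMe partition the cache device, and the Ceph OSD goes on the resulting /dev/bcache0.)

        # create backing + cache devices in one step (placeholders: /dev/sdb, /dev/nvme0n1p1)
        make-bcache -B /dev/sdb -C /dev/nvme0n1p1
        # optional: switch from the default write-through to write-back caching
        echo writeback > /sys/block/bcache0/bcache/cache_mode
        # put the Ceph OSD on the cached device
        ceph-volume lvm create --data /dev/bcache0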
  9. Docker to PVE-LXC conversion steps/tool?

    I appreciate your viewpoint, and when it's other people's network, money, and datacenter space, I agree, it's not as big of a deal. When you have cores to burn, there's not as much difference between the two. In my case, I'm moving my home lab from 5 machines, 160 cores, 1280G of ram, and 40...
  10. Docker to PVE-LXC conversion steps/tool?

    Been down that road. Got docker-swarm working in LXC-priv containers. Preferred it to KVM, but you can't snapshot them to back them up (long story), so I'm moving away from that as well. 99.9% of what is running in my lab & home clusters is LXC containers. (Even moved plex/jellyfin into...
  11. lxc.mount.entry vs mp0

    Great answer. I assume that matters for things like snapshots and backups?
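    (For context, a hedged illustration of the two forms the thread compares - the container ID and paths are placeholders. The mp0 form lives in the PVE-managed config and is visible to Proxmox's storage, snapshot, and backup tooling, while lxc.mount.entry is passed straight through to LXC and Proxmox doesn't know about it.)

        # /etc/pve/lxc/101.conf
        # Proxmox-managed mount point:
        mp0: /tank/media,mp=/media
        # raw LXC bind mount (target path is relative to the container rootfs):
        lxc.mount.entry: /tank/media media none bind,create=dir 0 0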
  12. [SOLVED] Version 8 installation no display

    Just keep in mind, if you care about iGPU passthrough (like Intel), nomodeset breaks Intel GVT-g. The *right* answer is for Proxmox to allow you to pick a console resolution at the start of the install, write that during the install, then boot afterwards using that resolution. As screen...
  13. Setting Up LACP Bond with VLAN Trunk and Bridge

    If you're going to do both link aggregation and VLAN tags, I've found it *far* easier to use Open vSwitch.

        apt update
        apt install openvswitch-switch openvswitch-common

    Here's an example showing two ethernet interfaces bundled into an 802.3ad/LACP bundle, with tagged VLANs, including proxmox...
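    Since the quoted post is truncated, here is a hedged sketch of that kind of /etc/network/interfaces layout as used with OVS on Proxmox - interface names, the VLAN tag, and addresses are placeholders, not values from the original post:

        auto eno1
        iface eno1 inet manual

        auto eno2
        iface eno2 inet manual

        # LACP bond of the two ports, attached to the OVS bridge
        auto bond0
        iface bond0 inet manual
            ovs_type OVSBond
            ovs_bridge vmbr0
            ovs_bonds eno1 eno2
            ovs_options bond_mode=balance-tcp lacp=active

        # the bridge guests attach to (VLANs tagged per guest NIC)
        auto vmbr0
        iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports bond0 mgmt0

        # internal port carrying the Proxmox management IP, tagged on VLAN 50
        auto mgmt0
        iface mgmt0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            ovs_type OVSIntPort
            ovs_bridge vmbr0
            ovs_options tag=50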
  14. Docker to PVE-LXC conversion steps/tool?

    Doing that already. KVM has:
        - more disk space overhead
        - more cpu overhead
        - more ram overhead
        - doesn't directly offer peripheral reuse (shared gpus)
    Especially on smaller boxes (4-8 cores), running a thin LXC is much preferred to a fat KVM. That being said, that's how I'm running Frigate right now...
  15. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    As a follow up -- did just that, running ceph on top of bcache for the last 12 months, zero issues. The access times are great (thank you NVME) and the rebuild times are much faster.
