Search results

  1. Docker to PVE-LXC conversion steps/tool?

    Wondering if anyone has a script/directions/etc to convert Docker images into Proxmox-compatible LXC containers? Several projects I use are only available as Docker containers, and I'd rather run them in an LXC than run Docker JUST for them. I've found a few resources online that deal with...
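    A rough manual route that is often suggested (a sketch only; the image name, VMID, and storage names are placeholders, and the exported rootfs ships without an init system, so some hand-fitting is usually required):

        # on any Docker host: flatten the image into a rootfs tarball
        docker create --name export-tmp nginx:latest
        docker export export-tmp | gzip > nginx-rootfs.tar.gz
        docker rm export-tmp

        # copy the tarball into /var/lib/vz/template/cache/ on the PVE node, then:
        pct create 200 local:vztmpl/nginx-rootfs.tar.gz \
            --hostname nginx-lxc --memory 1024 --unprivileged 1 \
            --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp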
  2. Console video resolution - what's the "Right" way?

    Since pve v5, I've struggled with getting a usable video resolution on the proxmox console. Especially with servers that have VGA output, proxmox tends to default to the highest supported resolution, often leaving the KVM or rack-mounted VGA out of range. Up to this point, I've been...
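    One common way to pin the console mode on a GRUB-based install (a sketch; the 1024x768 value is just an example):

        # /etc/default/grub
        GRUB_GFXMODE=1024x768
        GRUB_GFXPAYLOAD_LINUX=keep
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1024x768@60"

        update-grub   # then reboot; systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead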
  3. CEPH 17.2.7 - "ceph device ls" is wrong

    Just ran into this in the lab, haven't gone digging in prod yet. pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.2.16-20-pve) Cluster is alive, working, zero issues, everything in GUI is happy, 100% alive -- however... the "ceph device" table appears to have NOT updated itself for a...
  4. PVE8 - Change to IOMMU/Passthrough?

    Recently upgraded from PMX7 (Proxmox Version 7 or PVE7) to PMX8 (Proxmox Version 8 or PVE8), and I noticed the passthrough doesn't seem to work the same way in the UI as it used to. On PVE7 and earlier, the way you knew IOMMU/etc wasn't working was that in the UI, the "MAPPED DEVICES" dropdown would be empty...
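    A quick sanity check that IOMMU survived the upgrade (a sketch; the flags assume an Intel host booting via GRUB and may already be implied by the newer kernel):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

        update-grub && reboot    # or proxmox-boot-tool refresh on systemd-boot installs

        # after the reboot, groups should exist and DMAR should report remapping
        dmesg | grep -e DMAR -e IOMMU
        ls /sys/kernel/iommu_groups/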
  5. Recommendation Request: Dual Node Shared Storage

    Had a client request a fully redundant dual-node setup, and most of my experience has been either with single node (ZFS FTW) or lots of nodes (CEPH FTW). Neither of those seems to work well in a dual-node fully redundant setup. Here's my thinking, wanted to see what the wisdom of the...
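    For two nodes, one frequently used pattern is local ZFS on each node plus storage replication and an external quorum vote (a sketch, not a recommendation; the VMID, node name, and address are placeholders):

        # mirror VM 100's ZFS volumes to the second node every 5 minutes
        pvesr create-local-job 100-0 nodeB --schedule "*/5" --comment "dual-node failover copy"

        # a small third machine supplies the tie-breaking vote so quorum survives one node going down
        pvecm qdevice setup 192.0.2.10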
  6. [PMX8] Intel VFIO issues - console "locks up" but VM's all start fine.

    pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-12-pve) Dual Intel x5690 CPU (also tested with Intel 26xx v1/v2 series) Nvidia GT720 VGA output (also tested with motherboard VGA (matrox) output) Using VFIO to pass through cards to several VM's. Problem: As soon as the console booting...
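    If the host console sits on a GPU that vfio-pci later claims, the framebuffer going dark at that point is expected behaviour; the usual answers are to keep the console on a card that is not passed through, or to bind the passthrough card to vfio-pci from the initramfs (a sketch; the PCI ID is a placeholder, take the real one from lspci -nn):

        # /etc/modprobe.d/vfio.conf
        options vfio-pci ids=10de:128b
        softdep nouveau pre: vfio-pci
        softdep nvidiafb pre: vfio-pci

        update-initramfs -u -k all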
  7. PVE8: LXC containers on lvm storage fail to start after upgrade with acl=0

    Container fails to start after upgrade, not seeing anything obvious. lxc-start 61001 20230704010253.978 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 61001 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs...
  8. PVE8: breaks networking during upgrade leaving node unusable

    Upgraded my tiny lab cluster today - 3 headless miniPC nodes, (2) with C2930 and (1) with N3160. Identical drives and RAM. Dual NIC, LACP, managed by Open vSwitch, with multiple VLANs (including management) across the bundle. All had green pve7to8 reports. All dumped networking at the same...
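    A known-good shape to diff /etc/network/interfaces against after the upgrade (a sketch assuming current ifupdown2 and openvswitch-switch; interface names, the VLAN tag, and addresses are placeholders):

        auto vmbr0
        iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports bond0 mgmt

        auto bond0
        iface bond0 inet manual
            ovs_type OVSBond
            ovs_bridge vmbr0
            ovs_bonds enp1s0 enp2s0
            ovs_options bond_mode=balance-tcp lacp=active

        auto mgmt
        iface mgmt inet static
            ovs_type OVSIntPort
            ovs_bridge vmbr0
            ovs_options tag=10
            address 192.0.2.21/24
            gateway 192.0.2.1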
  9. cephFS not mounting till all nodes are up (7.3.6)

    5 node deployment in lab, noticed something odd. Cephfs fails to mount on any node until *ALL* nodes are up. i.e., 4 of 5 machines up, cephfs still fails. Given the pool config of cephfs_data and cephfs_metadata (both 3/2 replicated) I don't understand why this would be the case. In theory...
  10. net.ifnames=1 unsupported in 5.15.83-1-pve?

    Used to hate ifnames, trying to get over it. Migrating 100% functional hosts, which had "net.ifnames=0" in grub and 70-persistent-net.rules set up. Removed both things, updated init, and they stubbornly refuse to rename to enp* style names, after multiple reboots. Set up explicit LINK files...
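    For reference, the explicit-rename variant that usually works on 5.15 (a sketch; the MAC and target name are placeholders, and the initramfs rebuild matters because udev applies .link files from there too):

        # /etc/systemd/network/10-persistent-net.link
        [Match]
        MACAddress=aa:bb:cc:dd:ee:01

        [Link]
        Name=enp3s0

        update-initramfs -u   # then reboot; udevadm test-builtin net_id /sys/class/net/eth0 shows the names udev would pick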
  11. Question: Guide to replicating a pool?

    I'm looking for a guide on how to copy from an existing pool to a new pool. 1. if the source is KVM/LXC images? 2. If the source is CephFS? CephFS + EC? The Googles have not provided any solid directions, only old threads (sounds like cppool is out of vogue), and I'm sure it's something the CEPH...
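    For the RBD (KVM/LXC image) case, a commonly cited approach is copying image by image rather than using cppool (a sketch; the pool names are placeholders, and "rbd deep copy" is the variant that preserves snapshots):

        for img in $(rbd ls old-pool); do
            rbd export "old-pool/$img" - | rbd import - "new-pool/$img"
        done

    The CephFS case is different: data pools are attached to the file system itself, so moving data there generally means adding the new pool and copying files at the filesystem level rather than copying the pool.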
  12. Delaying VM's until ceph/cephFS is mounted

    In larger clusters, it can be quite a few seconds until all the OSD's are happy and cephfs is able to mount. Just restarted one cluster today (power loss) and noticed that while all the KVM's started fine, any LXC that used a CEPHFS bind-mount wouldn't start until cephfs was ready. (got unable...
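    One way to make a bind-mounted container wait is a pre-start hookscript that blocks until the mount exists (a sketch; the script name, path, and VMID are placeholders, and the storage holding the script needs the "snippets" content type enabled):

        #!/bin/bash
        # /var/lib/vz/snippets/wait-cephfs.sh - hookscripts receive <vmid> <phase>
        if [ "$2" = "pre-start" ]; then
            for i in $(seq 1 60); do
                mountpoint -q /mnt/pve/cephfs && exit 0
                sleep 5
            done
            exit 1   # give up after ~5 minutes so the failed start is visible
        fi
        exit 0

        # attach it to the container
        pct set 101 --hookscript local:snippets/wait-cephfs.sh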
  13. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Someone please explain to me why the loss of a single ring should force the entire cluster (9 hosts) to reboot? Topology - isn't 4 rings enough?? ring0_addr: 10.4.5.0/24 -- eth0/bond0 - switch1 (1ge) ring1_addr: 198.18.50.0/24 -- eth1/bond1 - switch2 (1ge) ring2_addr...
  14. PMX7.0 - HA - preventing entire cluster reboot

    pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve) - (5) node cluster, full HA setup, CEPH filesystem How do I prevent HA from rebooting the entire cluster? 20:05:39 up 22 min, 2 users, load average: 6.58, 6.91, 5.18 20:05:39 up 22 min, 1 user, load average: 4.34, 6.79, 6.23...
  15. Ceph 16.2.6 - CEPHFS failed after upgrade from 16.2.5

    TL;DR - Upgrade from 16.2.5 to 16.2.6 - CEPHFS fails to start after upgrade, all MDS in "standby" - requires ceph fs compat <fs name> add_incompat 7 "mds uses inline data" to work again. Longer version: pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve) apt dist-upgraded, CEPH...
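    The fix quoted in the excerpt, written out as a sequence (a sketch; "cephfs" stands in for the real file system name):

        ceph fs status                                          # symptom: every MDS sits in standby
        ceph fs compat cephfs add_incompat 7 "mds uses inline data"
        ceph fs status                                          # an MDS should now claim the active rank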
  16. Recommended Config: Multiple CephFS

    Been running around in circles trying to figure this out.. what's the best/most-direct way to get more than 1 CephFS running/working on a pmx7 cluster with the pool types NOT matching? i.e., I'd like to have the following: 1. /mnt/pve/cephfs - replicated, SSD 2. /mnt/pve/ec_cephfs - erasure...
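    A minimal sketch of the second, erasure-coded file system (pool names and PG counts are placeholders; each extra CephFS also needs its own active MDS, e.g. via pveceph mds create, and the fs-name storage option needs a reasonably recent PVE 7 release):

        ceph fs flag set enable_multiple true
        ceph osd pool create ec_cephfs_meta 32 replicated
        ceph osd pool create ec_cephfs_data 64 erasure
        ceph osd pool set ec_cephfs_data allow_ec_overwrites true
        ceph fs new ec_cephfs ec_cephfs_meta ec_cephfs_data --force

        # /etc/pve/storage.cfg - fs-name picks which CephFS this entry mounts
        cephfs: ec_cephfs
            path /mnt/pve/ec_cephfs
            content backup,iso,vztmpl
            fs-name ec_cephfs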
  17. CUDA in LXC Container

    Wondering if anyone has been able to make nvidia-smi/cuda/etc work in an LXC container. Feels like I'm close...configs added correctly in LXC: lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file,uid=65534,gid=65534 lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none...
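    The piece most often missing next to those mount entries is the device-cgroup allowance plus a matching userland driver inside the container (a sketch for /etc/pve/lxc/<vmid>.conf; the second major number varies per system - check ls -l /dev/nvidia* - and the container-side driver is typically installed with --no-kernel-module):

        lxc.cgroup2.devices.allow: c 195:* rwm
        lxc.cgroup2.devices.allow: c 510:* rwm
        lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file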
  18. Proxmox 4.2-17 Networking not starting

    Stood up a new machine side-by-side with my existing 3.x PMX installation, slowly learning the new. Oddly the PMX4 box isn't starting networking on boot. "service networking start" brings everything up, and this is in the log. Ideas? 22.219739] systemd[1]: Cannot add dependency job for...
  19. PMX4 - ZFS - zfs_arc_max - Can't exceed 32G?

    New install of PMX4 (4.4.6-1-pve) - backup/restored my containers from pmx3, imported my ZFS pools, and (almost) everything is peachy.. New machine has 4x the RAM, so I'm looking to increase the amount of RAM ZFS is allowed to use.. Worked out the math, and increased zfs_arc_max to 64G, by...
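    For reference, the usual way to raise the limit (a sketch; 68719476736 is 64 GiB in bytes, and the initramfs rebuild only matters when root is on ZFS):

        # /etc/modprobe.d/zfs.conf
        options zfs zfs_arc_max=68719476736

        # apply immediately without a reboot
        echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max

        update-initramfs -u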
  20. ZFS & vzdump

    ok, I know I must be missing something obvious, but I can't make vzdump work well with ZFSonLinux (Same box solution). When I first ran into this, I figured, "I'll just enable snapshots on the ZFS side, and forget the backups", but now, 12 months later, I'd actually like to try to solve it...
