Search results

  1. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Someone please explain to me why the loss of a single ring should force the entire cluster (9 hosts) to reboot? Topology - isn't 4 rings enough??
      ring0_addr: 10.4.5.0/24 -- eth0/bond0 - switch1 (1ge)
      ring1_addr: 198.18.50.0/24 -- eth1/bond1 - switch2 (1ge)
      ring2_addr...
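
    For reference, a minimal sketch of how per-node link addresses are usually declared in /etc/pve/corosync.conf (assuming corosync 3 / knet, which is what allows more than two rings; the node name, nodeid, and host addresses below are placeholders, not the poster's actual values):

      nodelist {
        node {
          name: pve1
          nodeid: 1
          ring0_addr: 10.4.5.11
          ring1_addr: 198.18.50.11
        }
      }
      totem {
        link_mode: passive   # knet fails traffic over between links as long as one is still up
      }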
  2. PMX7.0 - HA - preventing entire cluster reboot

    pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve) - (5) node cluster, full HA setup, Ceph filesystem. How do I prevent HA from rebooting the entire cluster?
      20:05:39 up 22 min, 2 users, load average: 6.58, 6.91, 5.18
      20:05:39 up 22 min, 1 user, load average: 4.34, 6.79, 6.23...
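
    A hedged sketch of one way to stop the cluster-wide reboots while debugging: a node only self-fences (via the watchdog) while it has active HA resources, so temporarily un-managing the guests removes the trigger. "vm:100" is a placeholder service ID, not one from this thread:

      ha-manager status          # show CRM/LRM state and the managed services
      ha-manager remove vm:100   # take a guest out of HA management entirely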
  3. Ceph 16.2.6 - CEPHFS failed after upgrade from 16.2.5

    TL;DR - Upgrade from 16.2.5 to 16.2.6 - CEPHFS fails to start after upgrade, all MDS in "standby" - requires ceph fs compat <fs name> add_incompat 7 "mds uses inline data" to work again. Longer version: pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve) apt dist-upgraded, CEPH...
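
    In other words (a hedged restatement of the fix quoted above, with "cephfs" as a placeholder filesystem name):

      ceph fs status                                                # every MDS shown as "standby"
      ceph fs compat cephfs add_incompat 7 "mds uses inline data"
      systemctl restart ceph-mds.target                             # on the MDS nodes, so one claims the active rank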
  4. Recommended Config : Multiple CephFS

    been running around in circles trying to figure this out.. what's the best/most-direct way to get more than 1 CephFS running/working on a pmx7 cluster with the pool types NOT matching? IE, I'd like to have the following: 1. /mnt/pve/cephfs - replicated, SSD 2. /mnt/pve/ec_cephfs - erasure...
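
    A hedged sketch of the Ceph-side commands that would back a layout like that (pool names follow the wish-list above, pg counts are placeholders, and the PVE storage entry is a separate step):

      ceph osd pool create ec_data 64 64 erasure
      ceph osd pool set ec_data allow_ec_overwrites true
      ceph osd pool create ec_meta 16 16 replicated     # CephFS metadata pools must stay replicated
      ceph fs flag set enable_multiple true             # may be required on some releases
      ceph fs new ec_cephfs ec_meta ec_data --force     # --force because the default data pool is EC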
  5. CUDA in LXC Container

    Wondering if anyone has been able to make nvidia-smi/cuda/etc work in an LXC container. Feels like I'm close... configs added correctly in LXC:
      lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file,uid=65534,gid=65534
      lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none...
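
    A hedged sketch of the rest of the commonly cited config (assumes a privileged container, matching NVIDIA driver versions on host and guest, and that the device nodes exist on the host; on cgroup v2 hosts the key is lxc.cgroup2.devices.allow, and the nvidia-uvm major number varies, so check ls -l /dev/nvidia*):

      lxc.cgroup.devices.allow = c 195:* rwm
      lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
      lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file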
  6. Proxmox 4.2-17 Networking not starting

    Stood up a new machine side-by-side with my existing 3.x PMX installation, slowly learning the new version. Oddly the PMX4 box isn't starting networking on boot. "service networking start" brings everything up... and this is in the log. Ideas? [22.219739] systemd[1]: Cannot add dependency job for...
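
    Hedged first checks (nothing thread-specific, just confirming whether the unit is enabled and reading its boot-time log):

      systemctl is-enabled networking.service
      systemctl status networking.service
      journalctl -b -u networking.service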
  7. PMX4 - ZFS - zfs_arc_max - Can't exceed 32G?

    New install of PMX4 (4.4.6-1-pve) - backup/restored my containers from pmx3, imported my ZFS pools, and (almost) everything is peachy.. New machine has 4x the RAM, so I'm looking to increase the amount of ram ZFS is allowed to use.. Worked out the math, and increased zfs_arc_max to 64G, by...
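
    For reference, a hedged sketch of the usual way to raise the limit (the value is 64 GiB expressed in bytes; the module parameter is read at load time, so the initramfs has to be rebuilt and the box rebooted, or the sysfs knob used for a live test):

      echo "options zfs zfs_arc_max=68719476736" > /etc/modprobe.d/zfs.conf
      update-initramfs -u
      # live change for testing:
      echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max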
  8. ZFS & vzdump

    ok, I know I must be missing something obvious, but I can't make vzdump work well with ZFSonLinux (Same box solution). When I first ran into this, I figured, "I'll just enable snapshots on the ZFS side, and forget the backups", but now, 12 months later, I'd actually like to try to solve it...
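
    For context, vzdump's snapshot mode relies on LVM, so on a plain ZFS dataset the usual fallbacks are suspend/stop mode or ZFS's own snapshots. A minimal sketch of the latter ("tank/vz" is a placeholder for whatever dataset holds /var/lib/vz):

      zfs snapshot tank/vz@daily-$(date +%Y%m%d)
      zfs list -t snapshot -r tank/vz
      zfs destroy tank/vz@daily-20140101      # prune old snapshots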
  9. vzctl stop --fast - behavior crashing box

    Problem: shut down Proxmox (which shuts down containers via "vzctl stop 103 --fast") and the box will crash with a kernel hang. If you ssh/vzctl in and shut down the container first (either with a shutdown -h now inside the container, or a NON-fast stop from the pmx CLI), then shut down the box, no...
  10. Proxmox / ZFS / Single-box solution

    Given all the awesome features of ZFS (and the awful zfs-on-linux speed/features/etc), it's a pity there's no way at present to get Proxmox running on a FreeBSD-based distro (where ZFS v28 is supported)... From what I can see, you can do OpenVZ and KVM on FreeBSD 9.x.. (however FreeBSD uses jails...
  11. Request: Quick guide to migrate system drive

    I'm running out of space on the system drive of a PMX3 installation, and I'd like to migrate it to a bigger drive. I have the new drive connected, and was looking for a quick guide on how to migrate everything to the new drive. I intend to remove the old drive, and boot from the new drive. (I'm...
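
    In the meantime, a hedged, generic sketch (run from rescue/live media so the old drive is idle; /dev/sdX is the old drive and /dev/sdY the new one, both placeholders). dd copies the partition table and MBR too, so the new drive should boot as-is; the extra space still has to be claimed afterwards (grow the partition, then pvresize/resize2fs as applicable):

      dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync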
  12. Unclean container shutdown due to "--fast" option

    Having issues with "kernel:BUG: soft lockup - CPU#X stuck for 67s! [umount:xxxx]", caused by the PMX UI calling "vzctl stop xxx --fast". The container, when "shutdown -h" is run inside it, shuts down fine. When stopped via the UI, it tends to hang the box. Is there any way to change the UI behavior to NOT...
  13. USB 3.0 support in Proxmox 3.0? {2.6.32-20-pve}

    On this thread ( http://forum.proxmox.com/threads/10884-USB-3-Compatibility-for-Proxmox-2-1 ) I tried to get several devices working under 2.x, and wasn't able to. I'm looking for a list of supported devices for USB3 under Proxmox 3.0 {2.6.32-20-pve} I looked under the kernel-headers, and...
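
    A hedged way to check what the running kernel itself offers, independent of any device list (assumes the matching config file is present under /boot, which it is on stock PVE kernels):

      grep -i xhci /boot/config-$(uname -r)   # xHCI = USB 3.0 host controller support
      lspci | grep -i usb                     # is the controller even seen?
      lsusb -t                                # bus/port tree with negotiated speeds
      dmesg | grep -i xhci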
  14. Proxmox 3.0 - proper OpenVZ shutdown steps

    One of the PMX3 boxes I help support is acting strangely on shutdown, so I need to clarify the expected behavior. When the container is shut down from WITHIN the VM (vzctl enter, then shutdown -h) and the PMX box is then rebooted, all is fine. When the shutdown is initiated from PMX (which should shut down the containers), the box will hardlock and not reboot...
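
    A hedged way to split the difference while testing - stop every container gracefully by hand (no --fast) before rebooting, to see whether the hardlock follows the fast-stop path:

      for ct in $(vzlist -H -o ctid); do
          vzctl stop "$ct"
      done
      reboot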
  15. Flashcache on Proxmox 3.x

    Anyone have this working yet and want to share? (Would be great to see flashcache/bcache as an install option.. it seems to be used a lot.)
  16. [proxmox 3.x] pve-kernel-2.6.32-20-pve kernel source?

    Looking for the kernel source (not headers) for 3.0 (pve-kernel-2.6.32-20-pve). Where do I find it, and what is the package called?
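
    The kernel build tree is published on git.proxmox.com; the exact repository name below is an assumption for the 2.6.32 series, so browse https://git.proxmox.com first to confirm it before cloning:

      git clone git://git.proxmox.com/git/pve-kernel-2.6.32.git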
  17. Proxmox 3.0 - appearance of "bind" mounts

    In pmx2.3, 'bind' mounts (i.e. passthroughs for containers) didn't show up in 'df'. Now they do, which makes df kinda confusing.. any way to suppress them being listed, or am I doing this wrong? E.g.:
      /var/lib/vz/private/200 8388608 573428 7815180 7% /var/lib/vz/root/200
      /dev/sda1...
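
    A hedged workaround sketch - filter the output rather than changing the mounts (the container roots show up as type simfs on the host, and the bind-mount targets all live under /var/lib/vz/root/):

      df -h -x simfs
      df -h | grep -v '/var/lib/vz/root/'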
  18. 3.0 upgrade, container disappeared?

    [[EDIT :: after about 20 minutes, it finally started/mounted 203 and 204.. files are all there, containers started... :: ]] Did the 2.3 to 3.0 upgrade and changed the network config; it took a couple of reboots, but the box is up and the vlans are working. However, I notice that one of the OpenVZ containers...
  19. 3.0, KVM, and "TAP" device

    Upgraded one of my lab nodes from 2.3 to 3.0 as a test.. so far so... not good? None of my KVMs will start, saying the following:
      /var/lib/qemu-server/pve-bridge: could not launch network script
      kvm: -netdev type=tap,id=net0,ifname=tap502i0,script=/var/lib/qemu-server/pve-bridge: Device...
  20. PVE 2.3 - 2.6.32-19 kernel panics

    Was running 2.6.32-12, upgraded to 2.6.32-19, and since then I've taken (4) kernel panics/hard crashes in the last 4 days. :( I can't find where in /var/log those are dumped, so I'm at the mercy of my memory.. the crash was caused by an individual OpenVZ container, with lots of complaints about XFS writes...
