Search results

  1. net.ifnames=1 unsupported in 5.15.83-1-pve?

    On a good path, getting closer to where I want to be. Problem 1 (not pmx's fault): had a boot drive (zfs mirror) fail - missed a step in the cloning, so while pmx was BOOTING off one drive, it was only writing grub changes to the OTHER drive. This made me chase my tail for hours on why changes...
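    A minimal sketch of keeping both members of a ZFS boot mirror bootable after a clone/replace, assuming a UEFI install managed by proxmox-boot-tool; the /dev/sda, /dev/sdb and /dev/sdb2 names are placeholders:

      # see which ESPs proxmox-boot-tool is currently syncing kernels/bootloader to
      proxmox-boot-tool status

      # bring the replacement disk's ESP into the sync set
      proxmox-boot-tool format /dev/sdb2
      proxmox-boot-tool init /dev/sdb2
      proxmox-boot-tool refresh

      # on a legacy BIOS/GRUB install, install GRUB to both mirror members instead
      grub-install /dev/sda
      grub-install /dev/sdb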
  2. net.ifnames=1 unsupported in 5.15.83-1-pve?

    No love so far.
      [ 13.758120] ixgbe 0000:09:00.1 eth12: renamed from eth2
      [ 13.780906] ixgbe 0000:09:00.0 eth11: renamed from eth0
      [ 13.812475] igb 0000:07:00.1 eth2: renamed from eth3
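    A quick sanity check, not from the thread, for what predictable name udev would assign (eth0 is a placeholder for the stubborn interface):

      # show the ID_NET_NAME_* values the net_id builtin derives for this NIC
      udevadm test-builtin net_id /sys/class/net/eth0

      # confirm the kernel command line no longer carries net.ifnames=0
      cat /proc/cmdline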
  3. net.ifnames=1 unsupported in 5.15.83-1-pve?

    Found this too - removing and rebooting:
      /lib/systemd/network# more 99-default.link
      # SPDX-License-Identifier: LGPL-2.1-or-later
      #
      # This file is part of systemd.
      #
      # systemd is free software; you can redistribute it and/or modify it
      # under the terms of the GNU Lesser General Public...
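    As a sketch of the same idea without deleting the packaged file: a mask in /etc/systemd/network overrides the copy in /lib/systemd/network, and the .link files get baked into the initramfs, so rebuild it before the reboot:

      # mask the shipped default link policy
      ln -s /dev/null /etc/systemd/network/99-default.link

      # .link files are copied into the initramfs by the udev hook
      update-initramfs -u -k all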
  4. net.ifnames=1 unsupported in 5.15.83-1-pve?

    Used to hate ifnames, trying to get over it. Migrating 100% functional hosts, which had "net.ifnames=0" in grub and 70-persistent-net rules set up. Removed both, updated the initramfs, and they stubbornly refuse to rename to enp* style names after multiple reboots. Set up explicit LINK files...
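    For reference, a minimal explicit .link file (a sketch; the MAC address and the enp7s0 name are placeholders), e.g. /etc/systemd/network/10-enp7s0.link:

      [Match]
      MACAddress=aa:bb:cc:dd:ee:ff

      [Link]
      Name=enp7s0

    followed by update-initramfs -u -k all and a reboot so the rename happens early in boot.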
  5. Question: Guide to replicating a pool?

    I'm looking for a guide on how to copy from an existing pool to a new pool. 1. If the source is KVM/LXC images? 2. If the source is CephFS? CephFS + EC? The Googles have not provided any solid directions, only old threads (sounds like cppool is out of vogue), and I'm sure it's something the CEPH...
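    One hedged sketch for the RBD (KVM/LXC image) side of this, using RBD live migration; the pool and image names are placeholders, and CephFS is a separate problem since its files reference the data pool directly:

      # per image: prepare the target, copy the blocks, then commit to drop the source
      rbd migration prepare oldpool/vm-100-disk-0 newpool/vm-100-disk-0
      rbd migration execute newpool/vm-100-disk-0
      rbd migration commit newpool/vm-100-disk-0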
  6. Can I move a CEPH disk between nodes?

    Stealing some of the pieces from another thread - just went through this myself, figured I'd share what worked 100% - about 20 drives completed so far, zero issues. IMPORTANT: this assumes DB/WAL are on a single physical drive. If they aren't, you'll have to consolidate them down first, then...
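    The checklist itself is truncated here, but the rough shape, as a hedged sketch (OSD id 12 is a placeholder), looks like:

      # old node: keep CRUSH from rebalancing while the disk is in transit, then stop the OSD
      ceph osd set noout
      systemctl stop ceph-osd@12

      # pull the drive, seat it in the new node, then:
      ceph-volume lvm activate --all     # scans LVM metadata and brings up any OSD it finds
      ceph osd unset noout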
  7. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    Curious where you ended up on this @adriano_da_silva - considering migrating one of my clusters from NVMe DB/WAL + spinners to bcache NVMe + spinners. Curious what 6 months of experience has done to your views.
  8. Error - 'Module 'devicehealth' has failed:'

    +1 - this solved the issue for me too. pve-manager/7.3-4/d69b70d4 (running kernel: 5.15.83-1-pve)
  9. CEPH Configuration: CephFS only on a Specific Pool?

    I realize this is a late answer, but I ran into this thread as I prepare to re-pool my metadata. Short answer: each pool must be "enabled" for specific applications. When you create the pool via the UI or command line, it defaults to RBD. You can see which applications are enabled for your pool...
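    The snippet cuts off before the commands; a hedged sketch of them (mypool is a placeholder):

      # show which applications (rbd, cephfs, rgw) a pool is tagged for
      ceph osd pool application get mypool

      # tag a pool for CephFS use
      ceph osd pool application enable mypool cephfs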
  10. [SOLVED] Some LXC CT not starting after 7.0 update

    Had the opportunity to try @Elfy's way today -- much cleaner -- thanks for sharing!
  11. Delaying VM's until ceph/cephFS is mounted

    Fantastic, thank you - will give it a shot. The option I suggested above seems to work 80% of the time, but leaves at least one node with no started services. :(
  12. Delaying VM's until ceph/cephFS is mounted

    In larger clusters, it can be quite a few seconds until all the OSDs are happy and cephfs is able to mount. Just restarted one cluster today (power loss) and noticed that while all the KVMs started fine, any LXC that used a CEPHFS bind-mount wouldn't start until cephfs was ready. (got unable...
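    For context, a hedged sketch of the start-order/delay style workaround discussed in this thread (VMIDs 100/101 and the 120s are placeholders, and as noted above it is not fully reliable):

      # start the CEPHFS-backed container last in the boot order...
      pct set 101 --startup order=99
      # ...and have an earlier guest hold the startup queue for 120s once it is up
      qm set 100 --startup order=1,up=120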
  13. Ceph uses false osd_mclock_max_capacity_iops_ssd value

    Same EXACT problem here, never ran into this before, but it's been a few months since I had a drive fail. Somewhere along the line this came in as a default. Been PULLING MY HAIR OUT trying to figure out what was going on. Stumbled across the values in the "CONFIGURATION DATABASE" section...
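    For anyone landing here, a hedged sketch of inspecting and clearing the stored values (osd.12 is a placeholder):

      # list what is persisted in the cluster configuration database
      ceph config dump | grep mclock

      # drop a bogus per-OSD measurement so the default applies again
      ceph config rm osd.12 osd_mclock_max_capacity_iops_ssd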
  14. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Just to make sure I understand this correctly: if I remove all the HA-configured LXC/KVM settings (I have DNS servers, video recorders, etc.) and make them standalone, no-failover configs, it won't fence if Corosync gets unhappy? (That doesn't seem to ring true to me in a shared-storage world.)
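    For context, a hedged sketch of what removing the HA config looks like on the CLI (vm:100 is a placeholder service ID):

      # list HA-managed resources and their state
      ha-manager status

      # stop managing a guest with HA (the guest itself keeps running)
      ha-manager remove vm:100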
  15. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Thanks, gave it some thought, and changed the priorities a bit - we'll see if it does better than it has in the past. (Also has me thinking about things that could lower the latency between nodes, like MTU on the ring interfaces.) It would be nice to gather raw data on keepalives across all...
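    On gathering raw data: a hedged sketch using the stock corosync tooling (counter names can vary a little between corosync 3 versions):

      # per-link connectivity from this node's point of view
      corosync-cfgtool -s

      # knet runtime statistics, including per-link latency averages
      corosync-cmapctl -m stats | grep latency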
  16. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Yeah, feels like only ceph replication has saved me from the heavy hand of rebooting. :( What would you suggest?
  17. Proxmox cluster reboots on network loss?

    Sorry to necro this thread, but it's one of *many* that come up with this title, and it goes directly to the core issue. Proxmox needs a configurable option for behavior on fencing. Rebooting an entire cluster upon the loss of a networking element is the sledgehammer, and we need the scalpel.
  18. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    Thank you, fantastic information, already used it to clean things up a bit. Not if PMX thinks we need to reboot. So far, none of the failures have taken down CEPH; it's pmx/HA that gets offended. (Ironic, because corosync/totem has (4) rings and CEPH sits on a single vlan, but I digress.) The...
  19. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    @fabian -- any thoughts on this question? I'd love to have more control over the failure steps/scenarios.
  20. Single ring failure causes cluster reboot? (AKA: We hates the fencing my precious.. we hates it..)

    That makes sense, thank you, I didn't understand the corosync/totem/cluster-manager inter-op. (Is this written up anywhere I can digest?) I'll drop the timeouts back to default values. Since I know how to cause the meltdown, it will be easy to test the results of the change. How would you...
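    A hedged way to confirm the timeouts really are back at defaults after editing /etc/pve/corosync.conf (remember to bump config_version there so the change replicates):

      # the values corosync is actually running with
      corosync-cmapctl | grep runtime.config.totem.token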