Search results

  1. [SOLVED] [Proxmox 8] [Kernel 6.2.16-4-pve]: ixgbe driver fails to load due to PCI device probing failure

    Honestly, I think it's just a matter of waiting for Intel to make a set of ixgbe drivers for the 6.2 kernel.... Or maybe we'll luck out with 6.3? :D
  2. [SOLVED] [Proxmox 8] [Kernel 6.2.16-4-pve]: ixgbe driver fails to load due to PCI device probing failure

    https://www.intel.com/content/www/us/en/support/articles/000091057/ethernet-products/500-series-network-adapters-up-to-10gbe.html Unfortunately, the nvm-update-utility doesn't work with the X520 cards.
  3. [SOLVED] [Proxmox 8] [Kernel 6.2.16-4-pve]: ixgbe driver fails to load due to PCI device probing failure

    I'm also having these same issues. Removing the bridge and rescanning doesn't work for me. Downgrading the kernel did, though; thanks for that! For future troubleshooting:
        root@pve01:~/drivers# lshw -class network
        *-network UNCLAIMED
             description: Ethernet controller
             product...
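
    A minimal sketch of the kernel-downgrade fix mentioned above, assuming a Proxmox 8 host whose boot entries are managed by proxmox-boot-tool with the pin subcommand available; the pinned version string is illustrative, not taken from the thread:

        # list the kernels still installed on this host
        proxmox-boot-tool kernel list
        # pin a known-good older kernel (pick a real version from the list above)
        proxmox-boot-tool kernel pin 6.2.16-3-pve
        reboot
        # once a fixed kernel/driver lands, undo the pin
        proxmox-boot-tool kernel unpin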
  4. Ceph Recovery after all monitors are lost?

    When this problem happened to me back in Jan (also caused by a Proxmox upgrade), the only way I was able to recover was by rebuilding each node, one by one.
  5. Ceph Recovery after all monitors are lost?

    How would you recover from this without a backup of the crush map?
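
    For reference, a hedged sketch of the upstream Ceph "recover mon store from the OSDs" procedure; the paths and the single-host loop are assumptions, and in a multi-node cluster the loop has to be run against every OSD and the partial stores merged:

        # fold the cluster maps stored in each OSD into a fresh mon store
        for osd in /var/lib/ceph/osd/ceph-*; do
            ceph-objectstore-tool --data-path "$osd" --no-mon-config \
                --op update-mon-db --mon-store-path /tmp/mon-store
        done
        # rebuild the monitor DB from those maps
        ceph-monstore-tool /tmp/mon-store rebuild -- \
            --keyring /etc/ceph/ceph.client.admin.keyring

    The CRUSH map should come back as part of the OSD maps the OSDs already hold, so no separate CRUSH backup is needed for this path.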
  6. Created an erasure code pool in ceph , but cannot work with it in proxmox

    David, this is really cool. I'm trying to replicate this on my setup. I have 7 servers, each with a single 1 TB NVMe drive that I'm using for Ceph. I know it's not the ideal setup, but I'm limited by the hosting company and cost. I did the following: ceph osd erasure-code-profile set CephEC \...
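
    The profile line above is cut off; a minimal sketch of what such a setup could look like on a 7-host cluster (the k=4/m=2 split and the pool name are assumptions, not from the post):

        # host-level failure domain; 4 data + 2 coding chunks fit in 7 hosts with one spare
        ceph osd erasure-code-profile set CephEC k=4 m=2 crush-failure-domain=host
        ceph osd pool create CephEC-pool 128 erasure CephEC
        # RBD needs partial-object writes enabled on EC pools
        ceph osd pool set CephEC-pool allow_ec_overwrites true
        ceph osd pool application enable CephEC-pool rbd

    Note that RBD images can't live entirely on an EC pool: the image metadata stays in a replicated pool, and the EC pool is referenced via the data-pool option, which is likely why plugging the EC pool straight into Proxmox fails.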
  7. Reinstall CEPH on Proxmox 6

    In PVE 7, this is the script I was able to use to reinstall Ceph without issue:
        systemctl restart pvestatd
        rm -rf /etc/systemd/system/ceph*
        killall -9 ceph-mon ceph-mgr ceph-mds
        rm -rf /etc/ceph /etc/pve/ceph.conf /etc/pve/priv/ceph* /var/lib/ceph
        pveceph purge
        systemctl restart pvestatd
        apt...
  8. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    Alright, that problem was caused by having cephx disabled while at the same time having storage keys at /etc/pve/priv/ceph/. I removed the storage keys at /etc/pve/priv/ceph/ and that fixed the issue. I had tried copying the new admin keyring over the Ceph storage keyring; however, that didn't...
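
    In PVE terms, that fix amounts to roughly the following; the storage ID shown is hypothetical, so check what actually exists before removing anything:

        # each PVE RBD/CephFS storage keeps its keyring here
        ls -l /etc/pve/priv/ceph/
        # with cephx disabled, these stale keys break client auth; drop them
        rm /etc/pve/priv/ceph/my-rbd-storage.keyring   # hypothetical storage ID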
  9. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    I disabled cephx and recreated the client.admin token. I then copied the token into the existing client.admin keyring and copied that to the other servers. My VMs that have drives on Ceph are able to launch and read the data from them now. I assume this is because cephx is off. Proxmox...
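
    A hedged sketch of that sequence; the config keys are the standard cephx switches, and the restart and keyring placement assume a stock PVE Ceph layout:

        # in the [global] section of /etc/pve/ceph.conf:
        #   auth_cluster_required = none
        #   auth_service_required = none
        #   auth_client_required = none
        systemctl restart ceph-mon.target
        # with auth off, recreate the admin credential
        ceph auth get-or-create client.admin \
            mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
        # write it back where clients look for it, then copy it to the other nodes
        ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring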
  10. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    Ya, I tried posting to the users mailing list; however, it doesn't appear that the message has made it through. I'm registered properly on there. Not sure why my message won't appear. I'll try disabling cephx and recreating the client.admin that way.
  11. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    I've tried that as well, although I get a different error:
        root@pve02:/var/lib/ceph/mon/ceph-pve02# ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pve02/keyring get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'
        2021-10-27T17:06:59.288+0000 7fb77b16b700 -1 auth...
  12. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    It doesn't. I get the same error:
        -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
        [errno 13] RADOS permission denied (error connecting to the cluster)
  13. Accidentally ran ceph auth rm client.admin from one of my monitor nodes

    Hi everyone, I accidentally ran `ceph auth rm client.admin` from one of my monitor nodes. I was following a tutorial for adding Ceph to k8s and misunderstood one of the steps in the tutorial. Any time I try to run a command from any of the nodes now, I get the following error...
  14. Out of memory crash during hdd speedtest in vm

    So for me this happened on HDDs. My ZFS pool is a ZFS RAID 1 mirror between two 6 TB HDDs (HGST_HUS726T6TALE6L1):
        root@pve01:~# zpool status
          pool: zfs01
         state: ONLINE
          scan: scrub repaired 0B in 00:02:29 with 0 errors on Sun Sep 12 00:26:30 2021
        config:
                NAME...
  15. [TUTORIAL] windows cloud init working

    Tested this on a Windows 2019 template. It doesn't appear to be applying any of the settings. When I watch the cloud-init output on the console, I'm seeing the following error when the network plugin goes to run:
        2021-09-07 19:36:38.193 4008 INFO cloudbaseinit.init [-] Executing plugins for...
  16. [TUTORIAL] windows cloud init working

    I've got this mostly working; however, I can't get the user password or DNS to set, even with the edits.
