https://www.intel.com/content/www/us/en/support/articles/000091057/ethernet-products/500-series-network-adapters-up-to-10gbe.html
Unfortunately, the NVM Update Utility doesn't work with the X520 cards.
I'm also having these same issues. Removing the bridge and rescanning doesn't work for me, but downgrading the kernel did. Thanks for that!
For future troubleshooting:
root@pve01:~/drivers# lshw -class network
  *-network UNCLAIMED
       description: Ethernet controller
       product...
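For anyone else needing to roll back, this is roughly how to keep booting the known-good kernel until the driver issue is fixed. The kernel version below is just a placeholder; use whichever one last worked for you:

# keep the known-good kernel installed (version here is a placeholder)
apt install pve-kernel-5.11.22-7-pve
# boot it by default: "1>2" means the 3rd entry in GRUB's "Advanced options"
# submenu; check /boot/grub/grub.cfg for the index matching your kernel
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT="1>2"/' /etc/default/grub
update-grub
reboot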
When this problem happened to me back in January (also caused by a Proxmox upgrade), the only way I was able to recover was by rebuilding each node, one by one.
David, this is really cool. I'm trying to replicate this on my setup. I have 7 servers, each with a single 1 TB NVMe drive that I'm using for Ceph. I know it's not the ideal setup, but I'm limited by the hosting company and cost.
I did the following:
ceph osd erasure-code-profile set CephEC \...
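The command above got cut off, so for reference, a profile along these lines is what I mean. The k/m values and pool name are only an example for 7 hosts, not necessarily the exact values I used:

# example EC profile for 7 hosts: 4 data chunks + 2 coding chunks,
# with hosts as the failure domain (values are illustrative)
ceph osd erasure-code-profile set CephEC \
    k=4 m=2 \
    crush-failure-domain=host
# create an EC data pool and allow overwrites so RBD/VM disks can use it
ceph osd pool create ecpool 128 erasure CephEC
ceph osd pool set ecpool allow_ec_overwrites true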
On PVE 7, this is the script I was able to use to reinstall Ceph without issue:
# get pvestatd into a clean state before purging
systemctl restart pvestatd
# drop any leftover ceph systemd units
rm -rf /etc/systemd/system/ceph*
# make sure no ceph daemons are still running
killall -9 ceph-mon ceph-mgr ceph-mds
# wipe the ceph configs, keys, and state directories
rm -rf /etc/ceph /etc/pve/ceph.conf /etc/pve/priv/ceph* /var/lib/ceph
# let proxmox clean up its own ceph integration
pveceph purge
systemctl restart pvestatd
apt...
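The apt step got cut off above; on PVE 7 it's typically just reinstalling the packages and letting the Proxmox tooling set things back up, something along these lines (the package list and release are assumptions, not the exact line I ran):

# reinstall the ceph packages (package set here is an assumption)
apt install --reinstall ceph-base ceph-common ceph-mon ceph-mgr ceph-osd
# or let the proxmox tooling pull the right packages back in
pveceph install --version pacific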
Alright, that problem was caused by having cephx disabled while at the same time having storage keys at /etc/pve/priv/ceph/.
I removed the storage keys at /etc/pve/priv/ceph/ and that fixed the issue.
I had tried copying the new admin keyring over the Ceph storage keyring; however, that didn't...
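For anyone hitting the same thing, the fix itself was just deleting the per-storage keyrings. The storage ID below is hypothetical; list the directory to see yours:

# proxmox keeps one keyring per ceph storage entry here
ls /etc/pve/priv/ceph/
# "ceph-vm" is a hypothetical storage ID; remove the ones for your storages
rm /etc/pve/priv/ceph/ceph-vm.keyring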
I disabled cephx and recreated the client.admin token. I then copied the token into the existing client.admin keyring and copied that to the other servers.
My VMs that have disks on Ceph are able to launch and read their data now. I assume this is because cephx is off. Proxmox...
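Roughly what the recreation looked like; the caps below mirror the stock client.admin caps, and the hostnames are placeholders:

# with cephx off, the mons accept unauthenticated commands,
# so client.admin can simply be recreated with the default caps
ceph auth get-or-create client.admin \
    mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' \
    -o /etc/ceph/ceph.client.admin.keyring
# copy it to the other nodes (hostnames are placeholders)
scp /etc/ceph/ceph.client.admin.keyring pve02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring pve03:/etc/ceph/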
Yeah, I tried posting to the users mailing list; however, it doesn't appear that the message has made it through. I'm registered properly on there. Not sure why my message won't appear.
I'll try disabling cephx and recreating the client.admin that way.
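For reference, disabling cephx means setting the three auth options to none in the [global] section of the config and restarting the Ceph daemons on every node (a sketch):

# /etc/pve/ceph.conf (shared across the cluster via pmxcfs)
[global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none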
I've tried that as well, although I get a different error:
root@pve02:/var/lib/ceph/mon/ceph-pve02# ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pve02/keyring get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'
2021-10-27T17:06:59.288+0000 7fb77b16b700 -1 auth...
It doesn't. I get the same error:
-1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
[errno 13] RADOS permission denied (error connecting to the cluster)
Hi Everyone,
I accidentally ran `ceph auth rm client.admin` on one of my monitor nodes. I was following a tutorial for adding Ceph to k8s and misunderstood one of the steps in the tutorial.
Any time I try to run a command from any of the nodes now, I get the following error...
So for me this happened on HDDs. My ZFS pool is a ZFS RAID1 mirror between two 6 TB HDDs:
HGST_HUS726T6TALE6L1
root@pve01:~# zpool status
  pool: zfs01
 state: ONLINE
  scan: scrub repaired 0B in 00:02:29 with 0 errors on Sun Sep 12 00:26:30 2021
config:
        NAME...
Tested this on a Windows 2019 template. It doesn't appear to be applying any of the settings. When I watch the cloudbase-init output on the console, I see the following error when the network plugin runs:
2021-09-07 19:36:38.193 4008 INFO cloudbaseinit.init [-] Executing plugins for...
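In case it helps with debugging, the plugin list that Cloudbase-Init runs through lives in its config file; a minimal sketch of the relevant section (the plugin class paths are from the Cloudbase-Init docs, and the exact set on a given template may differ):

# C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf
[DEFAULT]
plugins=cloudbaseinit.plugins.common.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.common.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.windows.createuser.CreateUserPlugin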