David, this is really cool. I'm trying to replicate it on my setup. I have 7 servers, each with a single 1 TB NVMe drive that I'm using for Ceph. I know it's not the ideal setup, but I'm limited by the hosting company and cost.
I did the following:
ceph osd erasure-code-profile set CephEC \...
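For context, the full profile command takes k/m values and a failure domain. On 7 hosts a k=5/m=2 split is one plausible shape; the numbers and the pool name below are illustrative, not necessarily what the elided command used:

ceph osd erasure-code-profile set CephEC k=5 m=2 crush-failure-domain=host
ceph osd pool create CephECPool 128 erasure CephEC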
In PVE 7, this is the script I was able to use to reinstall Ceph without issue:
# restart the PVE status daemon before tearing Ceph down
systemctl restart pvestatd
# remove the Ceph systemd units, then force-kill any daemons still running
rm -rf /etc/systemd/system/ceph*
killall -9 ceph-mon ceph-mgr ceph-mds
# wipe the Ceph config, keys, and local state (destructive: this includes OSD data under /var/lib/ceph)
rm -rf /etc/ceph /etc/pve/ceph.conf /etc/pve/priv/ceph* /var/lib/ceph
# restart pvestatd again so the web UI picks up the clean state
systemctl restart pvestatd
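From there the reinstall goes through the standard pveceph tooling. A minimal sketch of that side, assuming a fresh init (the CIDR is a placeholder for your cluster network):

pveceph install
pveceph init --network 10.0.0.0/24
pveceph mon create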
Alright, that problem was caused by having cephx disabled while at the same time having storage keys at /etc/pve/priv/ceph/.
I removed the storage keys at /etc/pve/priv/ceph/ and that fixed the issue.
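Concretely, that just meant deleting the per-storage keyrings (Proxmox names them after the storage ID):

rm /etc/pve/priv/ceph/*.keyring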
I had tried copying the new admin keyring over the Ceph storage keyring; however, that didn't...
I disabled cephx and recreated the client.admin key. I then copied the key into the existing client.admin keyring and copied that to the other servers.
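Roughly along these lines; a sketch, with pve02/pve03 standing in for the other nodes:

# re-export the recreated key into the local admin keyring
ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
# push the keyring to the other cluster members
scp /etc/ceph/ceph.client.admin.keyring root@pve02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@pve03:/etc/ceph/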
My VMs that have drives on Ceph are able to launch and read their data now. I assume this is because cephx is off. Proxmox...
Yeah, I tried posting to the users mailing list; however, it doesn't appear that the message made it through. I'm properly registered there. Not sure why my message won't appear.
I'll try disabling cephx and recreating the client.admin that way.
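For anyone following along, disabling cephx comes down to three settings in the [global] section of /etc/pve/ceph.conf, followed by a restart of the Ceph daemons (a sketch, not the full procedure):

auth_cluster_required = none
auth_service_required = none
auth_client_required = none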
I've tried that as well, although I get a different error:
root@pve02:/var/lib/ceph/mon/ceph-pve02# ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pve02/keyring get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'
2021-10-27T17:06:59.288+0000 7fb77b16b700 -1 auth...
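For reference, get-or-create normally lives under the auth subcommand, so the usual form of that recovery command would be:

ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pve02/keyring auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'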
I accidentally ran `ceph auth rm client.admin` from one of my monitor nodes. I was following a tutorial for adding Ceph to k8s and misunderstood one of the steps in the tutorial.
Any time I try to run a command from any of the nodes now, I get the following error...
So for me this happened on HDDs. My ZFS pool is a RAID1 mirror between two 6 TB HDDs:
root@pve01:~# zpool status
scan: scrub repaired 0B in 00:02:29 with 0 errors on Sun Sep 12 00:26:30 2021
Tested this on a Windows Server 2019 template. It doesn't appear to be applying any of the settings. When I watch the cloudbase-init output on the console, I see the following error when the network plugin runs:
2021-09-07 19:36:38.193 4008 INFO cloudbaseinit.init [-] Executing plugins for...
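One thing that can trip up cloudbase-init on Proxmox: it reads the ConfigDrive format, which Proxmox only selects automatically when the VM's ostype is set to Windows. If the template's ostype is off, you can force it (VM ID 100 is just an example):

qm set 100 --citype configdrive2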
This is a vzdump hook script that backs up your Proxmox VMs, containers, and PVE configs to remote storage such as Google Drive using Proxmox's native vzdump tool and rclone.
rclone is a command-line tool that allows you to sync...
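The hook mechanics are simple: vzdump calls the script with a phase name as its first argument, and the script reacts to the phases it cares about. A minimal sketch, assuming an rclone remote named gdrive and the stock dump directory (both names are placeholders):

#!/bin/bash
# vzdump passes the phase (job-start, backup-end, job-end, ...) as $1
if [ "$1" = "job-end" ]; then
    # once the whole backup job finishes, mirror the dumps to the remote
    rclone sync /var/lib/vz/dump gdrive:proxmox-backups
fi

It gets wired up by pointing vzdump at the file with a "script: /path/to/hook.sh" line in /etc/vzdump.conf.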
I'm also seeing the same issue. I cannot get Proxmox to connect to NFS:
root@pve-01:~# mount.nfs 10.100.0.20:/mnt/array1/PVE /mnt/temp -o 'vers=3' -vvv
mount.nfs: timeout set for Mon May 18 18:11:59 2020
mount.nfs: trying text-based options 'vers=3,addr=10.100.0.20'
mount.nfs: prog 100003...
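If anyone else hits this, two quick checks from the PVE node will tell you whether the server's RPC side is reachable at all (same server IP as in the output above):

# is rpcbind answering, and are nfs/mountd registered?
rpcinfo -p 10.100.0.20
# what does the server actually export?
showmount -e 10.100.0.20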