The other disks seem to have been reassigned successfully already.
The ZFS process trying to rename (reassign) zfs_datastore/vm-121-disk-0 is hanging.
If you have a backup, please restore that disk.
Please also post the output of
ps auxwf
Please note that your WD Red HDD is an SMR disk, which is known to perform poorly with ZFS.
The scrub process on your SSD is running very slowly, which also indicates an issue with the SSD. Please check if there is a firmware update available for the SSD.
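To check the SSD's health and its current firmware revision, smartmontools can be used; /dev/sdX below is a placeholder for your SSD's device node:
apt install smartmontools      # if not installed yet
smartctl -a /dev/sdX           # full SMART report; includes the firmware version and error counters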
You are referring to the zpool state of zfs_datastore, right?
Something went wrong on your filesystem on the SSD, possibly due to the hard power-off of the host.
ZFS is trying to evaluate the situation.
Please also post the output of
zpool status -v
tail -n +1...
It looks like your underlying ZFS storage hangs while trying to move the disk.
To get more detail on your system please post the results of the following:
pveversion -v
zpool status
zfs list
ls -l /dev/disk/by-id
You don't need to start the VMs to manipulate the disks.
Run qm rescan on the host and look at the Hardware tab of the VM.
For your locking issue, please run the following and upload the file.
journalctl -b > /tmp/journal.log
You can configure the direct interface with static IP addresses on a different subnet from your other connection.
https://pve.proxmox.com/wiki/Network_Configuration
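A rough sketch for /etc/network/interfaces on the first node (the interface name ens19 and the 10.10.10.0/24 subnet are placeholders; use .2 on the second node):
auto ens19
iface ens19 inet static
        address 10.10.10.1/24
Apply the change with ifreload -a (ifupdown2) or via the node's Network panel.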
You can reassign the disk to the new VM via the web UI.
Imagine your old omv VMID was 100; the disks are then named local-zfs:vm-100-disk-0.
Your new omv VMID is 101.
You need a VM with VMID 100 (you may have to create a temporary one).
Run qm rescan on the PVE node.
Now the previous VM will show up in...
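If you prefer doing this on the command line instead, a minimal sketch is to rename the ZFS dataset and rescan; the rpool/data prefix is an assumption, so check zfs list for the real path behind your local-zfs storage:
zfs list                                                        # find the actual dataset path
zfs rename rpool/data/vm-100-disk-0 rpool/data/vm-101-disk-0    # dataset path is an assumption
qm rescan --vmid 101                                            # the disk shows up as unused on VM 101
Then attach the unused disk in the Hardware tab of VM 101.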
SSH known_hosts entries can be stored by IP as well as by hostname.
Since you changed the hostnames, you also need to update the /etc/hosts file.
Make sure you have the correct IP addresses matching the hostnames of the remote machines.
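To drop stale entries, ssh-keygen can remove known_hosts records by hostname or by IP (the name and address below are placeholders; add -f /etc/ssh/ssh_known_hosts if the entries live in the system-wide file):
ssh-keygen -R oldhostname
ssh-keygen -R 192.168.1.50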
If you have backups of your VMs made through PVE, you can now copy them into a storage (with backup content enabled) on your new PVE setup.
You can find out where to copy them from the storage documentation, e.g. https://pve.proxmox.com/wiki/Storage:_Directory
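As a rough example, assuming vzdump backup files and the default local directory storage on the new host (paths and file names are placeholders):
scp /old-backups/vzdump-qemu-100-*.vma.zst root@new-pve:/var/lib/vz/dump/
Afterwards the backups appear under local -> Backups on the new host and can be restored from there (or with qmrestore on the CLI).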
I see that your underlying partition already has the correct size (1.8T).
Now you can resize the filesystem.
You can view the state of the physical volume (PV) with
pvs
You need to resize the PV with
pvresize /dev/nvme1n1p3
Now the logical volume (LV) has free space to grow, as shown...
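A minimal sketch of the remaining steps, assuming the LV in question is pve/root with an ext4 filesystem (both are assumptions; check with lvs and df -T):
vgs                                       # the volume group should now show free extents
lvextend -r -l +100%FREE /dev/pve/root    # grow the LV and, with -r, the filesystem on it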
The SDN.Use permission is needed from PVE 8.0 onwards.
You can limit the user/group permissions to specific VMs. Otherwise, what you're trying to achieve is unfortunately not possible.
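For example, to restrict a user to a single VM you could set an ACL on that VM's path (the user name, VM ID and the built-in PVEVMUser role are just examples):
pveum acl modify /vms/101 --users alice@pve --roles PVEVMUser
pveum acl list        # verify the resulting permissions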
It looks like you are planning significant changes to your hardware as well as your software setup.
Migrating from one software configuration to another can be tricky, and, as you also noticed, reinstalling Proxmox VE on the new hardware might be simpler.
The easiest would be to do a...
I tried to set this up myself on PVE 8.0.4 with user@pam.
The Pool.Audit permission is required to select the resource pool.
For the user to view their own permissions, the Sys.Audit permission would be required; in my tests, I was unable to see them without it.
To clone a VM, I also needed the SDN.Use...
You can configure the permissions to disallow VM.Console
https://pve.proxmox.com/pve-docs/chapter-pveum.html#_privileges
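One way is a custom role that contains the VM privileges you want to grant but leaves out VM.Console (the role name and privilege selection below are examples, not a recommendation):
pveum role add NoConsole --privs "VM.Audit VM.PowerMgmt VM.Config.CDROM"
pveum acl modify /vms/101 --users alice@pve --roles NoConsole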
If you are accessing the cluster as root@pam, you will have to create a new user.
https://pve.proxmox.com/pve-docs/chapter-pveum.html#pveum_users
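For example, a user in the built-in pve realm could be created and given a role like this (the user name, role and path are placeholders):
pveum user add alice@pve --comment "cluster admin without root"
pveum passwd alice@pve                                        # set the password interactively
pveum acl modify / --users alice@pve --roles Administrator    # or a more restricted role/path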