Good afternoon...
I was able to upgrade to 3.0 but can only boot into the server using 5.15.108-1-pve the server will hang if using 6.2.16-3-pve.
Not sure where to go from here since there's no way to look at logs. Hangs at loading the video driver I'm assuming.
So just for my understanding these drives say they are SLC yet they seem extremely inexpensive which makes me question the SLC claim.
https://smile.amazon.com/stores/page/4D9F4034-8479-4116-9462-4596E6B04E62?ingress=2&visitId=707604af-ca2d-4443-ab5f-de7528899716&ref_=ast_bln
Good to know @_gabriel. @Dunuin & @brucexx how can you differentiate between the types of SSD's without relying on a company telling you it is...? What spec or feature tells you that it's an enterprise level piece of hardware...?
My main interest in using SSD is that as I understand it the IO...
Disregard this is my stupidity... the VM has to be obviously shutdown in order to remove. So was looking at that issue incorrectly. Thanks for the tip though on getting rid of those ghost devices.
Thanks that worked on getting rid of the ghost node and vm's but I still have the inability to Remove vm's oddly enough the option is greyed out.
I noticed it while trying to figure out if there was some way to delete the ghost vm's prior to your recommendation.
One of the nodes in my cluster died and I cannot seem to get rid of it.
I did a: pvecm delnode sn-pve-34 and it no longer shows up when doing a pvecm status but it is still showing up in server view and the vm's that were on it when it died (I restored the vm's from backup to a working node...
Good afternoon...
I've read through the forum trying to find the answer but I am still confused as to whether or not it is a good idea to use SSD drives. My cluster uses LVM-Thin
One of my hypervisors died (thank God for Proxmox Backup :)) and trying to decide on hardware for it's replacement...
So weird thing is there was only 1 disk listed in the VM hardware. I had this issue with two other VM's on that node. You would try migrating and it would claim there were other disks on the VM. Additional strange thing is I know when I initially built the VM I used disk 0 so I'm also baffled on...
2021-11-24 10:39:46 starting migration of VM 206 to node 'sn-pve-32' (192.168.5.32)
2021-11-24 10:39:46 found local disk 'local-zfs:vm-206-disk-0' (via storage)
2021-11-24 10:39:46 found local disk 'local-zfs:vm-206-disk-1' (via storage)
2021-11-24 10:39:46 found local disk...
Oddly enough live migration will not work either:
ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-206-disk-0' to storage 'local-lvm' failed - cannot migrate from storage type 'zfspool' to 'lvmthin'
TASK ERROR: migration aborted
I have tried multiple iterations of the command with no success. The problem being the --targetstorage option. I have read several articles online and tried different strings and all with no success.
ERROR: migration aborted (duration 00:00:01): storage migration for...
Okay Fabian... would you be able to point me to the CLI to do that...?
Otherwise it seems I've figured everything else out.
Moving to EXT4 since I have my eyes on a couple servers with hardware raid now that I've use Proxmox for nearly a year now and LOVE it... and also ZFS seems to consume...
So more experimentation shows that I can migrate a VM that is running just not offline VM's...
With the running VM when migrating I can choose the LVM storage but with offline there is no option to choose 'target storage' is that by design...?
Hi Fabian...
Sorry my explanation was not complete... I went into the cluster configuration for disks and added the LVM-Thin for the new node only and also disabled ZFS from being seen on the new node. But I cannot see the disk to migrate VM's to it. I can see it to create a new VM though.
I have an existing cluster with all nodes using ZFS (all current nodes are raid 0). I decided to add a new node using the EXT4 filesystem after joining that new node to my existing cluster I cannot see the LVM file system.
I had the same issue after installing a new node and adding it to an existing cluster. I found this advice on Reddit and it fixed my issue with not being able to pull up the web gui
Remote into the server and typed "systemctl stop pvedaemon.service" followed by "systemctl start...
Can anyone clarify the procedure to set the memory limit for ZFS...? This is the link to the instructions below:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage
The procedure to make the change on temporary basis works perfectly and also as stated in the...
This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
By continuing to use this site, you are consenting to our use of cookies.