I went through the process to upgrade from 7.4 to 8. While trying to get the pve7to8 tool onto my machine, I appear to have corrupted the install so badly that booting goes through the BIOS and then shows a black screen. No errors or anything.
I can see from a recovery image that the Hard...
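For reference, the documented route is to bring 7.4 fully up to date first (pve7to8 ships with pve-manager, so there is nothing separate to install) and only then touch the repositories. A minimal sketch of that first step:

# Update the 7.4 install; the pve7to8 checker comes with pve-manager
apt update && apt dist-upgrade
# Run the full set of pre-upgrade checks before changing any repositories
pve7to8 --full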
Hello,
I am running into issues with corrupted data in a VM. I am quite new to Proxmox and do not know how to narrow down the cause, as I made a couple of changes to my system.
In my home system, I migrated my VMs from Hyper-V to Proxmox on the same hardware some weeks ago. All was running...
Hello experts :)
I just upgraded to a new server and I'm currently restoring containers. Some of them have GPT corruption warnings.
Example:
Disk /dev/mapper/pve-vm--100--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical)...
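If the warning is the common one about the backup GPT header not sitting at the end of the device (typical after restoring a disk onto a larger volume), sgdisk can relocate it. A sketch, using the device path from the fdisk output above:

# Move the backup GPT structures to the actual end of the device and rewrite
# the otherwise intact partition table (take a backup of the volume first)
sgdisk -e /dev/mapper/pve-vm--100--disk--0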
Hello everyone
I tested whether I could migrate VMs from PVE 6.4-13 to PVE 7.0-11 using backup and restore, which unfortunately fails. Both PVEs share an identical setup except for their Proxmox version, use LVM-thin storage, and were installed on top of Debian (Buster and Bullseye, respectively)...
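The path being tested is presumably the standard one; a sketch with a hypothetical VMID and archive name, assuming the backup file is copied between the nodes by hand:

# On the PVE 6.4 source: full stop-mode backup of VM 100 (VMID is hypothetical)
vzdump 100 --mode stop --compress zstd --dumpdir /mnt/backup
# On the PVE 7.0 target, after copying the archive over: restore into LVM-thin
qmrestore /mnt/backup/vzdump-qemu-100-2021_08_01-12_00_00.vma.zst 100 --storage local-lvm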
Hi,
I am using an Intel NUC 8i5 with a Samsung 860 Pro (500 GB) and a Crucial P2 (1 TB) SSD. The Samsung SSD is used for Proxmox and the Crucial SSD as storage for the virtual machines. Yesterday the status of the Crucial SSD changed to degraded. After a reboot of the node, this SSD is no longer...
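Before anything else it is worth pulling the drive's own health data; a sketch, assuming the Crucial P2 shows up as /dev/nvme0 (adjust to your device):

# Full SMART/health report via smartmontools (works for NVMe drives too)
smartctl -a /dev/nvme0
# Alternatively, the raw NVMe health log via nvme-cli
nvme smart-log /dev/nvme0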
I'm running a couple of Ubuntu 18.04.4 server VMs. The VMs run fine, but when I try to view a VM using the "noVNC console", I get the screen corruption shown in the attachment. When starting the VM, the "Proxmox" splash screen is shown fine in the console, but as soon as the VM commences boot, the display is...
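A common workaround for garbled noVNC output on Linux guests is to switch the VM's emulated display adapter; a sketch, with a hypothetical VMID:

# Switch VM 100 to the plain std VGA adapter (takes effect on the next start)
qm set 100 --vga std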
Hello,
I have a problem with my ZFS pool. My RAM recently flipped a bit, and as a result my pool is now corrupted with a permanent error (zpool status -vx):
errors: Permanent errors have been detected in the following files:
rpool/data/vm-101-disk-1@experimenty:<0x1>...
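Assuming the permanent error really is confined to that snapshot (as the <0x1> object inside @experimenty suggests), the usual recovery is to destroy the snapshot and rescrub; a sketch:

# Remove the snapshot that holds the damaged data
zfs destroy rpool/data/vm-101-disk-1@experimenty
# Re-read and checksum everything, then reset the error counters
zpool scrub rpool
zpool clear rpool

Note that zpool status may keep listing the error until a scrub (occasionally two) has completed.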
4 months ago I installed PVE v4.4 on an ext4-formatted SSD. I put about 10 different containers on it and was the only user. I didn't use Ceph or HA, and I had no other clusters--it was just one node with about 10 LXCs. All containers were under 50% usage of their allotted storage...
For the past 4 months I had PVE 4.4 running exclusively on a 250 GB M.2 SSD via a PCIe adapter (ext4 filesystem). The drive had less than 4 TB total written to it, I had 12 Ubuntu containers, and <50% overall space utilization as well as <50% consumed space within each container. Then...