I'm using Proxmox 7.2-7 and have an Ubuntu Server VM. I tried to upgrade the server but got a "not enough space" warning, so I increased the hard disk size in the Hardware tab and it now has 48GB. However, I still get the not enough space warning, so I'm not sure how to make the VM use the extra storage...
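In case it's useful: resizing the disk in the Hardware tab only grows the virtual disk; the partition and filesystem inside the guest have to be grown separately. A rough sketch, assuming the default Ubuntu Server LVM layout (the device, partition number, and VG names below are examples; check yours with lsblk first):

```shell
# Inside the Ubuntu guest, after enlarging the virtual disk in Proxmox.
lsblk                                     # confirm the disk itself now shows 48G

# Grow the partition to fill the new space (growpart is in cloud-guest-utils)
growpart /dev/sda 3

# With the default Ubuntu LVM layout, grow the LV and then the filesystem:
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
resize2fs /dev/ubuntu-vg/ubuntu-lv        # ext4; use xfs_growfs for XFS

# If root sits directly on the partition (no LVM), it's just:
# resize2fs /dev/sda3
```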
Thanks I’ll probably try to give this a shot tonight. I use Jellyfin but I expect steps 1-5 to be the same.
In step 4 you included lxc.cgroup2.devices.allow: c 29:0 rwm
but that wasn’t in your output from step 2. Where does that line come from or is it the same regardless of the rest?
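Not the OP, but the `c 29:0` part is just the device type plus major:minor numbers of a device node (29:0 is the Linux framebuffer, /dev/fb0), so it doesn't come from the step 2 output. You can look the numbers up for any device yourself:

```shell
# "c 29:0" in lxc.cgroup2.devices.allow means: char device, major 29, minor 0.
# ls -l prints "major, minor" for device nodes (guarded in case fb0 is absent):
if [ -e /dev/fb0 ]; then ls -l /dev/fb0; fi
# crw-rw---- 1 root video 29, 0 ...   -> c 29:0

# GNU stat can print them directly (note: %t/%T are in hex):
stat -c '%t:%T %n' /dev/null
# 1:3 /dev/null -> char device major 1, minor 3, i.e. "c 1:3"
```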
Wow, that really sucks. Thanks for trying. Maybe type it out in a word processor and save it before trying it again? I don't know if there's a length restriction on posts or what, but you might need to break it up?
Can you also provide the guide that you followed? I tried to do this once before with no success. I was reading up on it today before coming across this thread and found the same info as OP stating GVT-g isn't supported.
That's it! It looks like I selected that when I was trying to pass through HWA for Jellyfin. I removed that PCI device from the VM, rebooted PVE, and now it's available. Ran zpool attach rpool and now it's attached and resilvered with no issues. Thanks for the help @avw!
That might be it? If I'm reading this right, the kernel driver for one of them is vfio-pci
# lspci -k
00:00.0 Host bridge: Intel Corporation Device 4c53 (rev 01)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7d18
00:02.0 VGA compatible controller: Intel Corporation Device 4c8b...
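You can confirm exactly which devices are claimed by vfio-pci, since `lspci -k` prints a "Kernel driver in use" line per device. A sketch (00:02.0 is taken from the listing above; substitute whichever slot you care about):

```shell
# Show the driver bound to a single device, with vendor:device IDs
lspci -nnk -s 00:02.0
# look for "Kernel driver in use: ..."; vfio-pci means the device is
# reserved for passthrough and the host won't use it

# Passthrough config on the PVE host that commonly causes this binding:
grep -r vfio /etc/modprobe.d/ 2>/dev/null
cat /etc/modules
```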
I booted into a Linux Mint live disk and the NVMe disks look completely fine. They show up in lsblk and pass smartctl. Seems like this has to be a Proxmox issue.
https://pastebin.com/ZBAs224e
Here's the output of /etc/modules-load.d/modules.conf
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Modules required for PCI passthrough
vfio...
I left a monitor connected to the server. After boot it shows this-
[52.766252] nvme nvme1: failed to set APST feature (-19)
Also, not sure if it's related but during boot is this message:
[FAILED] Failed to start Load Kernel Modules.
See 'systemctl status systemd-modules-load.service' for...
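The boot message itself points at where to look; this is likely related, since a module listed in /etc/modules that fails to load (a typo, or a module missing for the running kernel) will trip that unit. A quick way to see which module failed and why, on the PVE host:

```shell
# Which module failed, and the exact error
systemctl status systemd-modules-load.service
journalctl -u systemd-modules-load.service -b --no-pager

# List everything configured to load at boot
cat /etc/modules /etc/modules-load.d/*.conf 2>/dev/null

# Try a suspect module by hand to see the raw error, e.g.:
# modprobe vfio
```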
Alright. Did a self-test in the BIOS for each drive and both passed. Removed and reinstalled the drives, did the self-test with each individually installed and the other removed, all passed. So there doesn't seem to be anything physically wrong with the drives.
I ran that command and have it removed from the zpool. The second disk doesn't even show up in lsblk. I'm not sure where else to look for it to do a SMART test.
Thanks for the response!
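One thing that might help with the SMART test: even when no block device shows in lsblk, the NVMe controller can still be visible, and SMART data is read from the controller node rather than the namespace. A sketch (device names are examples; adjust to your system):

```shell
# List NVMe controllers/namespaces even if no block device appears
nvme list                        # from the nvme-cli package
ls /dev/nvme*                    # controllers show up as /dev/nvme0, /dev/nvme1, ...

# Check whether the PCIe device itself is even visible
lspci -nn | grep -i -e nvme -e 'Non-Volatile'

# SMART data from the controller device:
smartctl -a /dev/nvme1
```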
# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using...
I just noticed this and not sure what I need to do. Does the disk physically need to be replaced? I'm still learning Linux and not sure how to resolve this. Help appreciated!
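For what it's worth, a DEGRADED pool with a missing/invalid label doesn't always mean dead hardware; the usual sequence is to check what ZFS sees first and only replace if the disk really is bad. A sketch, with placeholder device names:

```shell
# See which device is faulted, with full paths/GUIDs
zpool status -v rpool

# If the disk is healthy and just dropped offline at some point:
# zpool online rpool <device>
# zpool clear rpool

# If it genuinely needs replacing (new disk in the same or another slot):
# zpool replace rpool <old-device> <new-device>

# Either way, watch the resilver finish:
zpool status rpool
```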
@dkking @js2 Hi, new to Linux and Proxmox, and following this guide because I had the same issue. I'm at this line: 11) install pve-headers: apt install pve-headers-$(uname -r)
I'm sure I'm missing something and supposed to ensure something specific in the italics part, but entering it as is...
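In case the `$(uname -r)` part is what's tripping you up: that's shell command substitution, so it fills in your running kernel version automatically and there's nothing in that line you need to edit. For example:

```shell
# $(uname -r) expands to the running kernel release string:
uname -r                          # e.g. 5.15.39-1-pve (yours will differ)

# so the guide's command resolves to the matching headers package:
echo "apt install pve-headers-$(uname -r)"
```

Run the `apt install` line as-is (as root) and apt will pick the headers package that matches your kernel.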