Hi guys,
Nowadays I usually boot with CSM (Compatibility Support Module) enabled, i.e. in Legacy mode.
However, I would like to boot in UEFI mode, but when I turn off CSM and boot, the system drops me into initramfs and throws an error.
The error is "Cannot import 'rpool' unsupported version or feature"...
In the Proxmox VE 6.4 release notes known issues section it says:
Please avoid using zpool upgrade on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
ZFS 2.0.4 is also available in 6.3-6 which I'm...
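For context, a minimal sketch of how such a system is typically switched to UEFI boot with proxmox-boot-tool, assuming a recent PVE where that tool exists and the installer left a vfat ESP on the boot disk; /dev/nvme0n1p2 below is only a placeholder, not taken from the post:
# Show which ESPs, if any, are already registered
proxmox-boot-tool status
# Format and register the EFI system partition (placeholder device)
proxmox-boot-tool format /dev/nvme0n1p2
proxmox-boot-tool init /dev/nvme0n1p2
# Copy the current kernels and boot entries onto the ESP
proxmox-boot-tool refresh
Once the machine boots via UEFI with systemd-boot, the GRUB restriction on upgraded pool features quoted above no longer applies.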
I am an absolute beginner who just uses Proxmox to run 4 or 5 VMs for my ISP. Today my system just stopped, and now I'm stuck at initramfs. I've tried all the solutions I found in this forum and none of them worked.
I don't have any idea how to proceed if this system just won't start anymore...
When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it, it says usage: ZFS.
When I click on 'ZFS' just below 'Disks' there is a single pool named rpool, which does not include the 1TB NVMe, and I see no way to add it to this pool.
Please assist.
This morning we had a power outage in our server room;
all nodes recovered except one.
This node is part of the Ceph cluster (one of three); pools are set to replication 3, so the data is safe and the cluster is stable (apart from the Ceph warning).
Any idea how I can fix it?
For my rpool, I have two mirrored NVMe SSDs added by-id. However, it seems that one of the IDs has changed for some reason, and my rpool is now showing as degraded:
root@Skywalker:/dev/disk/by-id# zpool status -x
pool: rpool
state: DEGRADED
status: One or more devices could not be used...
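For reference, one way the renamed device can be brought back into the mirror, assuming the disk itself is healthy and only its /dev/disk/by-id name changed; the names below are placeholders, not the poster's devices:
# Identify the member reported as missing or unavailable
zpool status rpool
# Find the NVMe's new by-id name
ls -l /dev/disk/by-id/ | grep nvme
# Point ZFS at the device under its new name (triggers a resilver)
zpool replace rpool <old-name-or-guid> /dev/disk/by-id/nvme-NEW_ID-part3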
Hello,
I'm wondering what the Proxmox team's recommendations are for ZFS. I see Proxmox now supports UEFI boot, and supposedly the rpool supports all ZFS feature flags in this configuration.
During installation of Proxmox the "rpool" is created. The ROOT dataset is there for booting, and rpool/data is...
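As a point of reference, the dataset layout a default PVE ZFS installation creates looks roughly like the sketch below (not output from the poster's system); rpool/ROOT/pve-1 is mounted as / and rpool/data backs the local-zfs storage for guest volumes:
zfs list -o name,mountpoint -r rpool
NAME              MOUNTPOINT
rpool             /rpool
rpool/ROOT        /rpool/ROOT
rpool/ROOT/pve-1  /
rpool/data        /rpool/data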
Hi,
for various reasons I prefer my data to be encrypted on disk. Until now I used a ZFS-on-LUKS setup, which worked pretty well but had its quirks and some other drawbacks.
So I was really happy to see ZFS native encryption and its support in Proxmox 6 (thanks for that!).
To have my VMs...
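As a minimal sketch of what native encryption looks like in practice (the dataset name rpool/data-enc is just an example, not from the post):
# Create a natively encrypted dataset; keyformat=passphrase prompts for one
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data-enc
# After a reboot the key has to be loaded before the dataset can be mounted
zfs load-key rpool/data-enc
zfs mount rpool/data-enc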
Hey all,
Running into some problems getting the system to work properly now that the release is out and I want to make a ZFS mirror instead of just a single drive. For clarity, the system showed no issue booting from ext4 or xfs on NVMe or SATA SSDs (if used by themselves), but now that I'm...
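In general, turning a single-disk rpool into a mirror is done by attaching a second, equally sized partition to the existing vdev; a sketch with placeholder device names, and note that the boot/ESP partitions on the new disk have to be set up separately:
# Attach a new partition to the existing member to form a mirror
zpool attach rpool /dev/disk/by-id/EXISTING_DISK-part3 /dev/disk/by-id/NEW_DISK-part3
# Watch the resilver complete
zpool status rpool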
Hello there,
I've dumped my reliable XPEnology setup for the sake of being more flexible (and legally compliant) with Proxmox and I'm slowly starting to regret it. I hope I can get it fixed with your help though... Here's what I've got and what I have tried:
* Proxmox 5.3-2 installed via USB...
Hi,
After a big storm my two-SSD RAID 1 crashed (one dead and one with too many bad sectors)... I'm not really lucky...
I have replaced both SSDs and reinstalled Proxmox in RAID 1.
My RAID 5 is OK, and after installing mdadm all my VGs/LVs are OK and active.
My question: how can I restore my VMs with...
Is it possible to use rpool/ROOT/pve-1, rpool/ROOT, or rpool
as ZFS storage?
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,images,rootdir,vztmpl
        maxfiles 0

dir: zpool-big
        path /zpool-big
        content backup,vztmpl,iso,rootdir,images...
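For ZFS-backed guest storage, a zfspool entry rather than a dir entry is the usual form; a sketch of what that could look like in /etc/pve/storage.cfg (the storage name here is an example, and rpool/ROOT/pve-1 itself is usually left alone since it is the root filesystem):
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1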
Need some advice regarding a simple and probably silly question. All of my systems have a small SSD as the system disk and a secondary, single larger SSD for VMs. If I destroy the rpool which is created by default and recreate it on /dev/sdb, will I screw anything up?
The second SSD is ZFS. I basically...
Hi All,
We've had a disk fail in our ZFS rpool and we're looking for the procedure to replace the disk.
So far we've found a couple of wikis; however, I thought I'd run it by you guys and see if it's still correct.
We are running Proxmox VE 4.4.
1) Replace the physical failed/offline drive...
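For comparison, the sequence usually described for a legacy-GRUB rpool of that era looks roughly like the sketch below; device names and partition numbers are placeholders and should be checked against the current wiki before use:
# Copy the partition table from a healthy mirror member to the new disk
sgdisk --replicate=/dev/sdNEW /dev/sdHEALTHY
sgdisk --randomize-guids /dev/sdNEW
# Swap the failed member for the matching partition on the new disk
zpool replace rpool <failed-device-or-guid> /dev/sdNEW2
# Reinstall the boot loader on the new disk once the resilver finishes
grub-install /dev/sdNEW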
Hey all,
I installed Proxmox to a mirrored pair of SSDs quite a while ago. I noticed almost immediately that the resulting rpool used old-fashioned Linux device names:
NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror-0    ONLINE       0     0...
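One approach often mentioned for moving a mirrored rpool over to by-id names is to redo it one member at a time so the pool stays online; it temporarily drops redundancy, and all device names here are placeholders:
# Detach one member, then re-attach the same disk under its by-id path
zpool detach rpool sda2
zpool attach rpool sdb2 /dev/disk/by-id/ata-DISK_SERIAL-part2
# Wait for the resilver to finish before repeating with the other member
zpool status rpool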
Four months ago I installed PVE v4.4 on an ext4-formatted SSD. I put about 10 different containers on it and was the only user. I didn't use Ceph or HA, and I had no other clusters; it was just one node with about 10 different LXCs. All containers were under 50% usage of their allotted storage...
Hey guys!
Did an update last night:
apt-get update
apt-get dist-upgrade
after that, ZFS wanted a
zpool upgrade -a
which required a reboot.
But after the reboot, I was stuck at BusyBox:
I tried:
zpool import -c /etc/zfs/zpool.cache -N rpool
exit
and it worked.
Then I performed the
zpool...
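If the manual import from BusyBox works but the same thing happens on every boot, the usual follow-up is to refresh the cachefile and the initramfs once the system is up; a sketch:
# Make sure the cachefile is current and rebuilt into the initramfs
zpool set cachefile=/etc/zfs/zpool.cache rpool
update-initramfs -u -k all
Note that, as quoted from the 6.4 release notes above, running zpool upgrade on an rpool booted by GRUB in legacy mode can leave the pool unreadable by GRUB itself, in which case the fix is more involved than refreshing the cachefile.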
Hi,
We are running a new install of Proxmox VE 4.3-12
Our configuration is Dual E5-2630v4 w/ 256GB RAM & 8 x Samsung PM863a Drives.
We did our install using the Proxmox 4.3 ISO installer and completed a RAIDZ-2 configuration with our 8 drives, and all seemed to go smoothly.
We've found some...
Dear Proxmox community,
Dear Proxmox support team,
Today I upgraded to PVE 4.2 and I'm now having boot issues because of a device mapping mismatch. I had installed the root file system on ZFS during the initial installation back in 2015. Now, when trying to boot PVE I receive:
Message...