This issue has been solved. This thread is for anybody with slow I/O performance who is searching for keywords. The cause might be ZFS trimming the rpool. I'm running Proxmox 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with an HP SSD EX920 1TB for the rpool.
My proxmox node was unresponsive. VMs...
While working through this problem here, I realized that I needed to fix the layout of the partition in my rpool before proceeding.
/dev/sdb3 and /dev/sda3 are now mirrored partitions (correct), but sda2 should be an EFI boot partition. I'm trying to remove sda2 from the rpool so
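How the extra partition comes out of the pool depends on how it got in. A hedged sketch, assuming the device names from the post above (always verify against your own `zpool status` output before running anything, since these commands change the pool):

```shell
# First check whether sda2 shows up as a member of the mirror vdev
# or as its own top-level vdev:
zpool status rpool

# If sda2 is an extra member of the mirror vdev, detach it:
zpool detach rpool /dev/sda2

# If sda2 was accidentally added as its own top-level vdev, removal
# needs the device_removal feature (OpenZFS 0.8+):
zpool remove rpool /dev/sda2
```

If `zpool remove` reports the operation is unsupported, the pool layout may only be fixable by recreating the pool from a backup.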
Hi everyone. I need some ZFS help. I recently got a couple of new drives to upgrade my main rpool, which consists of 2 drives in a mirror config. I was able to successfully rebuild the rpool with autoexpand by hot-swapping in the new drives and resilvering. However, I forgot to copy the boot...
Although I'm fairly new to Proxmox, I do have some ZFS/Linux background.
I was wondering whether there is a way to change the default ZFS rpool dataset layout:
root@proxmox:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 1.64G 76.8G 104K...
I have a cluster with all nodes on ext4 root.
I would like to add a few (new) nodes with zfs raid1 root. I found out that there are some issues, and I would like to know what's the best way to proceed:
- The local-lvm storage shows up with a "?" on the machine with zfs root, which is...
Nowadays I usually boot with CSM (Compatibility Support Module) enabled, in Legacy mode.
However, I would like to boot in UEFI mode, but when I turn off CSM and boot, it throws me into initramfs with an error.
The error is "Cannot import 'rpool' unsupported version or feature"...
In the Proxmox VE 6.4 release notes known issues section it says:
Please avoid using zpool upgrade on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
ZFS 2.0.4 is also available in 6.3-6 which I'm...
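The known issue quoted above only bites systems that boot in legacy/BIOS mode with GRUB reading the pool directly; a minimal sketch for checking which case applies before touching `zpool upgrade`:

```shell
# On Linux, the presence of /sys/firmware/efi indicates the kernel
# was started via UEFI; its absence indicates a legacy/BIOS boot,
# where upgrading rpool's feature flags can break GRUB's pool import.
if [ -d /sys/firmware/efi ]; then
    boot_mode=uefi
else
    boot_mode=legacy
fi
echo "boot mode: $boot_mode"
```

On Proxmox VE 6.4 and later, `proxmox-boot-tool status` also reports how the ESPs are configured, which is the more authoritative check on that platform.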
I am an absolute beginner who just uses Proxmox to create 4 or 5 VMs for my ISP. Today my system just stopped, and now I'm stuck at initramfs. I've tried all the solutions I found in this forum and none of them worked.
I don't have any idea how to proceed if this system just won't start anymore...
When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it it says usage: ZFS.
When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool.
This morning we had a power outage issue in our server room;
all nodes recovered except one.
This node is part of a Ceph cluster (one of three); the pools are set to replication 3, so the data is safe and the cluster is stable (excluding a Ceph warning).
Any idea how I can fix it?
For my rpool, I have two mirrored NVME SSDs added by-id. However, it seems that one of the ids has changed for some reason, and I am now showing degradation of my rpool:
root@Skywalker:/dev/disk/by-id# zpool status -x
status: One or more devices could not be used...
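When a mirror member goes DEGRADED only because its /dev/disk/by-id name changed, the disk itself is usually fine. A hedged sketch of the common remedies (OLD_ID and NEW_ID are placeholders for the actual ids; since rpool is the root pool, the export/import step has to be done from a live/rescue environment, not the running system):

```shell
# Re-import the pool, telling ZFS to search by-id paths; this normally
# reattaches a member whose id string changed:
zpool export rpool
zpool import -d /dev/disk/by-id rpool

# If the old id is truly gone, point the vdev at the new id explicitly:
zpool replace rpool OLD_ID /dev/disk/by-id/NEW_ID

# Verify the mirror is back to ONLINE and resilvered:
zpool status rpool
```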
I'm wondering what the Proxmox team's recommendations are for ZFS. I see Proxmox now supports UEFI boot, and supposedly rpool supports all ZFS feature flags in this configuration.
During installation of Proxmox the "rpool" is generated. The ROOT dataset is there for booting, and rpool/data is...
For various reasons I prefer my data to be encrypted on disk. Until now I used a ZFS-on-LUKS setup, which worked pretty well but had its quirks and some other drawbacks.
So I was really happy to see zfs native encryption and its support with Proxmox 6 (Thanks for that!).
To have my VMs...
Running into some problems getting the system to work properly now that the release is out and I want a ZFS mirror instead of just a single drive. For clarity, the system showed no issues booting using ext4 or xfs on NVMe or SATA SSDs (if used by themselves), but now that I'm...
I've dumped my reliable XPEnology setup for the sake of being more flexible (and legally compliant) with Proxmox and I'm slowly starting to regret it. I hope I can get it fixed with your help though... Here's what I've got and what I have tried:
* Proxmox 5.3-2 installed via USB...
After a big storm, my two-SSD RAID 1 crashed (one SSD dead and one with too many bad sectors)... I'm not really lucky...
I have replaced both SSDs and reinstalled Proxmox in RAID 1.
My RAID 5 is OK, and after installing mdadm all my VG/LVs are OK and active.
My question: how can I restore my VMs with...
Is it possible to use rpool/ROOT/pve-1, rpool/ROOT, and rpool
as ZFS storage?
Need some advice regarding a simple and probably silly question. All of my systems have a small SSD as the system disk and a single larger secondary SSD for VMs. If I destroy the rpool which is created by default and recreate it on /dev/sdb, will I screw anything up?
The 2nd SSD is ZFS. I basically...
We've had a disk fail in our ZFS rpool, were looking for the procedure to replace the disk.
So far we've found a couple of wikis; however, I thought I'd run it by you guys and see if it's still correct.
We are running Proxmox VE 4.4.
1) Replace the physical failed/offline drive...
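The commonly documented procedure for replacing a failed disk in a bootable ZFS mirror can be sketched as follows (a hedged outline, not PVE-4.4-specific gospel; sdX/sdY and FAILED_ID are placeholders for the healthy disk, the new disk, and the failed vdev's identifier):

```shell
# 1) Physically replace the failed drive, then copy the partition
#    table from the healthy disk sdX onto the new disk sdY:
sgdisk /dev/sdX -R /dev/sdY

# 2) Randomize the new disk's partition GUIDs so they don't collide:
sgdisk -G /dev/sdY

# 3) Resilver onto the new disk's ZFS partition:
zpool replace rpool FAILED_ID /dev/sdY3

# 4) Make the new disk bootable (legacy/GRUB systems; newer PVE
#    releases use proxmox-boot-tool instead):
grub-install /dev/sdY

# 5) Watch the resilver until the pool is back to ONLINE:
zpool status rpool
```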