Hi, I can't find anything here or in the docs, and I've noticed that my root disk is slowly filling up with ZFS rpool snapshots / rollback points.
I admit I've never had to use them, and I'm sure they are useful... But I don't need rollbacks to before Christy, and I haven't found a way to configure a...
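(For anyone searching: a minimal sketch of inspecting and pruning such snapshots by hand, assuming the default Proxmox pool name rpool; the snapshot name in the destroy line is a hypothetical example.)
# list all snapshots on the root pool, oldest first
zfs list -t snapshot -o name,used,creation -s creation -r rpool
# destroy a single snapshot once it is no longer needed
zfs destroy rpool/ROOT/pve-1@example-snapshot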
Hey, thank you ahead of time for reading this. In my homelab I run a three-node Proxmox and Ceph cluster, with each node running on a two-SSD ZFS mirror. About a week ago one of my nodes had a kernel panic. I'm kicking myself that I didn't take a screenshot, but I rebooted and haven't...
Hi everyone,
Clearly I am doing something wrong, because every blog post/forum thread I read gives almost the same instructions.
My skill level is 5/10; still learning, and this is the first time I've had a disk fail :(
zpool replace pool_name full_old_disk_path full_new_disk_path
zpool replace rpool...
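A worked sketch of the same replacement on a bootable ZFS mirror; the device names and by-id paths below are placeholders, and the sgdisk steps assume the usual Proxmox layout where partition 3 holds the pool:
# copy the partition table from the healthy mirror member to the new disk
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb   # randomize the GUIDs on the copied table
# replace the failed partition with its counterpart on the new disk
zpool replace rpool /dev/disk/by-id/ata-OLD_DISK-part3 /dev/disk/by-id/ata-NEW_DISK-part3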
Good morning,
When I look at my server's ZFS information in Proxmox, it tells me that I have 112TB of space and 99.11TB free, which is normal. But when I look at the information for the rpool that I created, it tells me I have 94.14TB with 73.68TB used. Yet the two...
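The gap is usually explained by zpool list reporting raw capacity across all disks (before parity), while zfs list reports usable space after redundancy. A quick way to compare the two views, assuming the pool is named rpool:
zpool list rpool    # raw size of all vdevs, parity included
zfs list rpool      # usable space as seen by datasets
zpool status rpool  # the raidz layout that accounts for the difference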
Good day
So I have a broken RAID-5 OS disk in my Proxmox installation, and I would like to replace rpool with a new pool (rpool2?) running on better server-grade SATA SSDs instead of the old spinning rust. Yes, the old disk is broken, not just showing a temporary error.
I know that there are some tools...
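One possible migration sketch using zfs send/receive; the device paths are placeholders, and the bootloader setup on the new disks (proxmox-boot-tool) is a separate step not shown here:
# build the new mirrored pool on the SSDs
zpool create -o ashift=12 rpool2 mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B
# snapshot everything and replicate it to the new pool
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs receive -F rpool2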
This issue has been solved. This thread is for anybody with slow I/O performance who is searching for keywords. The cause might be ZFS trimming the rpool. I'm running Proxmox VE 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with an HP SSD EX920 1TB for the rpool.
My proxmox node was unresponsive. VMs...
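If trim is the suspect, it can be confirmed and paused; a sketch for a pool named rpool:
zpool status -t rpool     # per-device trim progress
zpool get autotrim rpool  # whether continuous trim is enabled
zpool trim -s rpool       # suspend a running trim (it can be restarted later)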
Hi all,
While working through this problem here, I realized that I needed to fix the partition layout in my rpool before proceeding.
/dev/sdb3 and /dev/sda3 are now mirrored partitions (correct), but sda2 should be an EFI boot partition. I'm trying to remove sda2 from the rpool so
I have...
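A sketch of removing such a stray member, assuming sda2 really is attached to the pool; which command applies depends on how it was added:
zpool status rpool            # check whether sda2 is a mirror leg or a top-level vdev
zpool detach rpool /dev/sda2  # if it is one side of a mirror
zpool remove rpool /dev/sda2  # if it was added as its own top-level vdev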
Hi everyone. I need some ZFS help. I recently got a couple of new drives to upgrade my main rpool, which consists of 2 drives in a mirror config. I was able to rebuild the rpool with autoexpand by hot-swapping in the new drives and resilvering. However, I forgot to copy the boot...
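The missing boot partitions can typically be recreated with proxmox-boot-tool; a sketch with hypothetical device names, assuming the default layout where partition 2 is the ESP:
proxmox-boot-tool format /dev/nvme0n1p2  # create the ESP filesystem on the new disk
proxmox-boot-tool init /dev/nvme0n1p2    # install the bootloader and register the ESP
proxmox-boot-tool status                 # confirm both disks are now kept in sync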
Hey there,
although I'm fairly new to Proxmox, I do have some ZFS/Linux background.
I was wondering whether there is a way to change the default ZFS rpool dataset layout:
root@proxmox:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  1.64G  76.8G   104K...
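Datasets can be added or renamed after installation; a sketch with example names — note that renaming rpool/data would also require updating the matching entry in /etc/pve/storage.cfg:
# add a dataset with its own mountpoint
zfs create -o mountpoint=/srv/data rpool/srvdata
# rename the dataset that holds guest volumes (update storage.cfg to match)
zfs rename rpool/data rpool/guests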
I have a cluster with all nodes on ext4 root.
I would like to add a few (new) nodes with a ZFS RAID1 root. I found out that there are some issues, and I would like to know the best way to proceed:
- The local-lvm storage shows up with a "?" on the machine with ZFS root, which is...
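A common way to handle the "?" is restricting each storage definition to the nodes that actually provide it; a sketch of /etc/pve/storage.cfg with placeholder node names:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve1,pve2

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes pve3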
Hi guys,
Nowadays I usually boot with CSM (Compatibility Support Module) enabled, via Legacy mode.
However, I would like to boot using UEFI mode, but when I turn off CSM and boot, it drops me into initramfs and throws an error.
The error is "Cannot import 'rpool' unsupported version or feature"...
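One way out of that error is letting proxmox-boot-tool load the kernel from a FAT ESP, so the bootloader never has to read the ZFS pool itself; a sketch assuming the default layout where partition 2 is the ESP (the device name is hypothetical):
proxmox-boot-tool status            # current boot mode and registered ESPs
proxmox-boot-tool format /dev/sda2  # format the ESP
proxmox-boot-tool init /dev/sda2    # install the loader and register the partition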
In the Proxmox VE 6.4 release notes known issues section it says:
Please avoid using zpool upgrade on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
ZFS 2.0.4 is also available in 6.3-6, which I'm...
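Before considering zpool upgrade, it helps to confirm how the node actually boots; a quick check:
ls /sys/firmware/efi       # present only when booted in UEFI mode
proxmox-boot-tool status   # shows the boot mode and which ESPs are managed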
I am an absolute beginner who just uses Proxmox to run 4 or 5 VMs for my ISP. Today my system just stopped, and now I'm stuck at initramfs. I've tried all the solutions I found in this forum and none of them worked.
I don't have any idea how to proceed if this system just won't start anymore...
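For the record, the classic fix from the initramfs prompt is a manual import; a sketch, assuming the root pool is named rpool:
zpool import -N rpool  # import without mounting datasets (-f may be needed after an unclean shutdown)
exit                   # leave the shell and let the boot continue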
When I check 'Disks' under 'Storage View', it shows the 1TB NVMe I have installed; next to it, it says usage: ZFS.
When I click on 'ZFS' just below 'Disks', there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool.
Please assist.
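If the goal is to mirror the existing rpool disk onto the NVMe, attach (not add) is the operation; a sketch with placeholder device ids — attach creates a mirror, while add would stripe the pool:
zpool status rpool  # note the id of the current rpool device
zpool attach rpool /dev/disk/by-id/ata-EXISTING_DISK-part3 /dev/disk/by-id/nvme-NEW_1TB_DISK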
This morning we had a power outage in our server room;
all nodes recovered except one.
This node is part of a three-node Ceph cluster (pools are set to replication 3), so the data is safe and the cluster is stable (aside from a Ceph warning).
Any idea how I can fix it?
For my rpool, I have two mirrored NVMe SSDs added by-id. However, it seems that one of the IDs has changed for some reason, and my rpool is now showing as degraded:
root@Skywalker:/dev/disk/by-id# zpool status -x
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used...
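If the disk itself is healthy and only its id changed, re-importing with the by-id directory often clears this; a sketch — since the root pool cannot be exported while running from it, this is done from a live/rescue environment:
zpool export rpool
zpool import -d /dev/disk/by-id rpool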
Hello,
I'm wondering what the Proxmox team's recommendations are for ZFS. I see Proxmox now supports UEFI boot, and supposedly rpool supports all ZFS feature flags in this configuration.
During installation of Proxmox the "rpool" is created. The ROOT dataset is there for booting, and rpool/data is...
Hi,
for various reasons I prefer my data to be encrypted on disk. Until now I used a ZFS-on-LUKS setup, which worked pretty well but had its quirks and some other drawbacks.
So I was really happy to see zfs native encryption and its support with Proxmox 6 (Thanks for that!).
To have my VMs...
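A minimal sketch of an encrypted dataset under the default layout (the dataset name is an example); after a reboot the key must be loaded before the dataset can be mounted:
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data/encrypted
zfs load-key rpool/data/encrypted
zfs mount rpool/data/encrypted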
Hey all,
Running into some problems getting the system to work properly now that the release is out, and I want to make a ZFS mirror instead of using just a single drive. For clarity, the system showed no issues booting using ext4 or xfs on NVMe or SATA SSDs (if used by themselves), but now that I'm...