Hi,
I would like to talk about a topic already discussed hundreds of times here on the forum, but in slightly more scientific and practical terms:
ZFS on top of Hardware RAID
Introduction
I already know all the warnings regarding this configuration, but since in most cases the references...
1. I have 2 pools, rpool and vpool
2. rpool was created by default during installation; however, all my VMs reside on vpool and local-zfs is empty
3. Can I remove local-zfs and merge the freed space into local?
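A minimal sketch of one common approach, assuming the default installation layout where local-zfs is backed by the rpool/data dataset and local is a directory storage on rpool's root dataset (verify this on your system before destroying anything):

```shell
# Confirm local-zfs (rpool/data) really holds no volumes:
zfs list -r rpool/data

# Remove the storage definition from Proxmox:
pvesm remove local-zfs

# Destroy the now-unused dataset; its space returns to rpool,
# which the 'local' directory storage shares automatically:
zfs destroy -r rpool/data
```

Since ZFS datasets share the pool's free space, "merging" is just freeing the space; no resizing of local is needed.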
1. I installed PVE8 in a Secure Boot environment and ended up with GRUB. In my logs below, the bootable disk is 9701-213F
2. I'm trying to create a mirror by adding a 2nd drive to rpool
3. Adding to rpool was seamless, but the 2nd drive doesn't boot if I remove the 1st
4. The steps I tried are...
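For comparison, the usual procedure for making a second rpool mirror disk bootable looks roughly like this (device names and the ESP partition number are assumptions; adjust them to your layout):

```shell
# Replicate the partition table from the original disk to the new one,
# then give the copy fresh GUIDs:
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Format and register the new disk's ESP (partition 2 in the default
# PVE layout); proxmox-boot-tool detects the GRUB/systemd-boot mode:
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
proxmox-boot-tool status   # both ESPs should now be listed
```

Attaching the disk to the pool only mirrors the ZFS partition; without an initialized ESP of its own, the second drive has nothing to boot from.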
Hi everyone, I'm having a blast using Proxmox!
I'm facing an issue with how I want to organize my LXC infrastructure. Here's a brief rundown of the setup:
single node (neuromancer) running both VMs and CTs
zfs pool ("vault") with a few datasets (both used for proxmox storage and user storage)...
I have a ZFS pool (raidz2) and it's getting full. I want to swap 2 of the disks for new ones with higher capacity. I've read the instructions for the swap & resilvering. I have 2 quick questions:
1. Can I swap 2 disks simultaneously, or do I do it 1 at a time?
2. Will there be an issue with using...
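The usual replace-and-resilver flow looks roughly like this (pool name and device paths are placeholders):

```shell
# Allow the pool to grow once every member disk has been replaced:
zpool set autoexpand=on tank

# Replace the first disk and wait for the resilver to complete:
zpool replace tank old-disk-1 new-disk-1
zpool status tank          # watch until "resilver completed"

# Only then replace the second disk:
zpool replace tank old-disk-2 new-disk-2
```

Raidz2 can technically tolerate two missing disks, so replacing both at once is possible, but doing them one at a time keeps redundancy intact throughout; the pool only grows after all disks in the vdev are the larger size.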
1. I have 2 NVMe drives set up as a mirror on Proxmox #1 (PVE7); let's call it `apool`
2. If I move the 2 NVMe to proxmox #2 (PVE8), what are the steps required to mount them again as `apool`?
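The usual sequence is a clean export on the old host and an import on the new one; this is a sketch, with the storage name and content types as examples:

```shell
# On proxmox #1, before pulling the drives (optional but clean):
zpool export apool

# On proxmox #2, import by stable by-id device paths:
zpool import -d /dev/disk/by-id apool
# If the pool was not exported cleanly, a force import may be needed:
# zpool import -f apool

# Register the pool as a PVE storage:
pvesm add zfspool apool --pool apool --content images,rootdir
```

Pool version matters in one direction only: a pool created on PVE7's older OpenZFS imports fine on PVE8, but avoid running `zpool upgrade` if you might ever move the disks back.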
I have a ZFS pool with raidz3-0. I have 7 disks in the pool, each 1 TB.
In the Node -> ZFS screen I see that the pool size is 6.99 TB, with 5.31 TB shown as free and 1.68 TB allocated.
But the Storage -> zfs-pool summary screen shows usage as 2.73 TB of 3.86 TB.
(please see attached screen shots)
Can...
So instead of ordering 1 expensive Dell/HP server, I decided to buy 2 identical, more consumer-grade servers. The purpose is that if one fails, the other takes over.
Main components:
MSI MPG X670E CARBON WIFI
AMD Ryzen 9 7950X3D Processor (32 threads)
192 GB DDR5 RAM
Crucial MX500 250GB SSD (boot)
2...
Guys, I'm brand new to Proxmox since I'm retiring my 2013 Synology and creating a new homelab. My objective is to have a reliable system for the next 10 years, running TrueNAS and quite a few VMs and LXCs.
I'm using a 7950x on a miniITX X670E-I Asus Strix. I'll use 2x onboard 2TB NVMe...
I can't seem to find a concrete answer for this issue. I see a lot of people commenting on poor disk speed inside VMs, but I'm not seeing any real way to get it sorted.
I have a storage pool set up using NVMe disks, and I'm using fio to benchmark it with the following command:
fio --ioengine=libaio...
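The post's exact command is cut off; a typical libaio random-write invocation of this kind looks like the following (all parameters here are illustrative, not the original poster's values):

```shell
# 4k random writes, bypassing the page cache, against a test file:
fio --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --numjobs=1 --iodepth=32 --size=4G --runtime=60 --time_based \
    --name=nvme-test --filename=/mnt/pool/fio-testfile
```

When comparing host vs. guest numbers, `--direct=1` matters: without it the page cache can mask the real device speed on one side of the comparison.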
Hi there,
I have an issue with ZFS on Proxmox VE 8.1.3.
I have 3 ZFS pools: one mirror consisting of 2 NVMe SSDs, and two raidz1 pools, each consisting of 4 SATA HDDs.
Whenever there's high IO load on one of my HDD pools from one VM, I get an IO delay of about 60% in the...
(ZFS newbie here; I might be using the wrong terminology, so please correct me if I do.)
I'm going to migrate my Unraid server to Proxmox. With regard to storage I could use some help.
The hardware:
2x NVMe 1TB SSDs (Seagate FireCuda)
2x SATA 18TB HDDs (Seagate Exos)
Both the SSDs and the HDDs will...
Hello,
I am quite new to Proxmox, but I'm enjoying it so far.
I currently have 3x 2TB SATA hard drives, and I'm trying to set them up in Proxmox as a RAID with 2 main drives and 1 hot spare, so if 1 drive fails the hot spare will replace it until I get a new drive. So this...
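One way to get exactly that layout with ZFS is a two-way mirror plus a hot spare; a sketch, with `tank` and the device names as placeholders (stable /dev/disk/by-id paths are preferable in practice):

```shell
# Two-disk mirror with the third disk as a hot spare:
zpool create tank mirror /dev/sda /dev/sdb spare /dev/sdc

# Let ZED activate the spare automatically when a member disk fails:
zpool set autoreplace=on tank

zpool status tank
```

After a failed disk is replaced with a new one and resilvered, the spare detaches back to the AVAIL state.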
Hello all
Recently I experienced a very strange issue after upgrading a node from Proxmox 6 to 7. We had run pve6to7 and no errors were reported. We then followed the wiki article and did everything by the book. Our node was installed from the Proxmox 6.x ISO and was upgraded to the latest version...
Hey everyone!
As a small disclaimer: I'm still somewhat new to this, and accordingly sometimes a bit overwhelmed when it comes to one topic or another.
Since it's recommended to use VMs rather than LXCs for Docker containers, I would like to set up a CouchDB...
For my VMs and CTs I use a ZFS pool of Samsung Evo SSDs. I've been running it for a few years now, but today I found out that I can't create new VMs or CTs on this pool. Moving to this pool is also no longer possible. I get the following message:
cannot share...
If all the disks are OK, why is the spare one INUSE?
zpool status -v
pool: STORAGE
state: ONLINE
scan: scrub repaired 0B in 11:48:03 with 0 errors on Sun Dec 10 12:12:04 2023
config:
NAME STATE READ WRITE CKSUM
STORAGE...
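A spare typically stays INUSE when it was activated during an earlier fault and the original disk later came back healthy; in that situation the usual fix is to detach one side of the pair manually (a sketch, with the device name as a placeholder):

```shell
# Identify which vdev the spare is paired with:
zpool status STORAGE

# Detach either the spare or the returned original disk;
# detaching the original keeps the spare as a full member instead:
zpool detach STORAGE <spare-or-old-disk>
```

After the detach, `zpool status` should show the spare back in the AVAIL state (or the pool healthy with the spare promoted, depending on which device was detached).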
Dear,
I just read about the silent data corruption bug in OpenZFS here: https://github.com/openzfs/zfs/issues/15526
Since this is a long-standing bug in ZFS, I was wondering if we need to take immediate action on Proxmox nodes that have an OpenZFS version below 2.2.0? I read on the internet that...
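A quick way to assess exposure is to check the running OpenZFS version on each node; the mitigation below was the one discussed in the linked issue thread, so verify it there before applying (it reportedly reduces, but does not fully eliminate, the likelihood of hitting the bug):

```shell
# Check the loaded OpenZFS module and userland versions:
zfs version

# Discussed mitigation for openzfs/zfs#15526 on affected versions
# (non-persistent; lost on reboot):
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
```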
I have a 64GB RAM system with 16TB of RAID HDD storage, of which 8TB is used. The OS and VM/CTs are all on a 2TB SSD drive.
Proxmox is set up to use ZFS, and I've noticed that my system has been peaking in memory usage, even though individually the services don't even use 25%. I've also noticed...
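This pattern usually points at the ZFS ARC cache, which by default grows up to roughly half of RAM and is counted as used memory. It can be capped; the 8 GiB figure below is just an example value:

```shell
# Persistent cap (value in bytes; 8 GiB here as an example):
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # then reboot

# Or apply immediately at runtime without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```

The ARC releases memory under pressure, but capping it avoids the alarming "almost full" readings and leaves predictable headroom for VMs.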
I've got another post open where I'm discussing how to setup a new server that will have a total of 36 available disks and get all VMS to be able to access the data within the single large pool. That post is HERE, in that post, @Dunuin threw me a curve ball and suggested dRaid might be the best...