Hello to all,
After an unexpected shutdown, the Proxmox server lost its ZFS pool.
I can still see my old ZFS storage in the Datastore's Storage section, but the pool itself is missing from the server.
Disk health is good (I checked through the RAID controller).
All my VMs were stored in there.
Is it possible to...
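(Not from the original post, but a first sanity check that is usually worth trying in this situation; the pool name "tank" is only a placeholder.)
# show pools that are currently imported (the missing pool will not be listed)
zpool status
# scan the disks for pools that are visible but not imported
zpool import
# if the pool shows up, import it; -f may be needed after an unclean shutdown
zpool import -f tank
# if the default scan finds nothing, point the scan at stable device links
zpool import -d /dev/disk/by-id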
Hey, I hope someone might be able to help and point me in the right direction to solve the following issue.
I have a zfs-pool in my machine which I use for mass storage. While I knew I was running low on storage, I expected to still have a few gigs available. But when I tried to rename a file on...
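(Not from the original post, but for a pool that has run completely full, a rough sketch of where the space usually hides; the pool name "tank" is a placeholder.)
# used/available space per dataset, including space pinned by snapshots
zfs list -o space -r tank
# list snapshots; deleted files often live on inside old snapshots
zfs list -t snapshot -r tank
# destroying an old snapshot is typically the quickest way to free space again
zfs destroy tank/data@old-snapshot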
Hello everyone,
We run a PBS with a ZFS datastore. Unfortunately, the ZFS filled up completely.
I have already deleted some old VM backups and namespaces via the UI and removed other old VM backups via the CLI. After researching here in the forum, I also tried to delete older directories...
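(Not part of the original post, but worth noting: deleting backup snapshots in PBS only removes the index files; the data chunks are reclaimed by garbage collection, and GC keeps chunks for roughly a day after they were last referenced. The datastore name "store1" below is a placeholder.)
# start garbage collection so unreferenced chunks are actually freed
proxmox-backup-manager garbage-collection start store1
# check progress and the amount of space reclaimed
proxmox-backup-manager garbage-collection status store1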
Public Service Announcement: Anyone using Western Digital/SanDisk NVMe drives should probably have a look over this new firmware update:
https://support-en.sandisk.com/app/answers/detailweb/a_id/51469
Note that Host Memory Buffer (HMB) problems with these WD NVMe drives have been reported with...
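(Not part of the announcement, but to check which firmware a drive is currently running before deciding whether the update applies, something like the following works with nvme-cli or smartmontools; the device path is a placeholder.)
# firmware revision is shown in the "FW Rev" column
nvme list
# or per device via the SMART identity block
smartctl -i /dev/nvme0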
Summary: I need to know how to remove a zpool from one machine in a cluster while leaving it on the others.
I have a three-node cluster with two matching main servers (M1 and M2) and a third, smaller machine (S3) which provides the third vote for failover and hosts a limited range of VMs. Each...
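(Not from the original post, but the usual approach looks roughly like this, assuming the storage entry is called "bigpool" and should remain on M1 and M2 only; the storage definition in /etc/pve/storage.cfg is cluster-wide, so restrict it before touching the pool on S3.)
# limit the cluster-wide storage definition to the nodes that keep the pool
pvesm set bigpool --nodes M1,M2
# then, on S3 only: export (or destroy) the local pool so its disks are released
zpool export bigpool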
Does anyone know what could cause this host crash:
We have a linux guest VM running multi-day simulations. The guest reported the following:
Heaps of space in zpool:
My current guess is that an online resize of the sdb disk image earlier in the day looked fine, but at the point the additional...
Hello dear Proxmox Backup users!
One of our 4 HDDs failed in our Proxmox Backup Server.
We replaced it with a new HDD and then resilvered the whole pool.
Now, after resilvering, I realised I may have made a mistake.
See this screenshot:
You can see here that every HDD has the following...
So I have a few systems built out of cheap hardware. I am trying to get as much performance out of them as I can. Right now I am getting around 100 MB/s write and 25 MB/s read. Each system has different SSDs, but fdisk shows a 512-byte sector size for all of them.
Disk /dev/sda: 465.76...
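(Not from the original post, but since many SSDs report 512-byte logical sectors while using larger physical pages internally, the pool's ashift is usually the first thing to check; pool and device names are placeholders.)
# logical vs. physical sector size as reported by the kernel
lsblk -o NAME,LOG-SEC,PHY-SEC
# ashift the pool was created with (12 = 4K); it cannot be changed afterwards
zpool get ashift tank
# when (re)creating a pool, 4K alignment is forced like this
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2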
I run a ZFS health check, and this morning I got this message:
The number of I/O errors associated with a ZFS device exceeded acceptable levels. ZFS has marked the device as faulted.
impact: Fault tolerance of the pool may be compromised.
eid: 154
class: statechange
state: FAULTED...
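(Not part of the ZED mail, but the usual follow-up looks roughly like this; pool and device names are placeholders.)
# see which vdev is faulted and what the read/write/checksum counters say
zpool status -v tank
# check the drive itself before deciding between clearing and replacing
smartctl -a /dev/sdX
# if it was a one-off (e.g. cabling), clear the errors and let a scrub verify
zpool clear tank sdX
zpool scrub tank
# if the disk is actually failing, replace it
zpool replace tank sdX /dev/disk/by-id/NEW-DISK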
Hi again,
I am still tinkering with the project of a TrueNAS VM on PVE 8.1.4, using an HP DL380 G10 with a PCIe/NVMe/U.2 riser card and cage. I already posted here without much success before getting some more info at the TrueNAS forums over here.
It turned out that some people there would...
I screwed up big time and lost my ZFS pool! How can I recover it? I need help.
I've been using PVE v8 for a few months at home, mainly for Windows & macOS VMs and testing new systems, including Plex & TrueNAS Core.
It was going great, and I recently added Proxmox Backup Server in order to back up the...
I want to use two U.2 PCIe NVMe SSDs as the fast disks for the disk images of tens of VMs on one PVE host. To minimize the risk of data loss due to disk corruption, I would like to use RAID 1.
So, what is the best filesystem for RAID 1?
ZFS with RAID 1,
or
F2FS on MDRAID with RAID 1?
I...
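(Not from the original post, but for reference, the ZFS mirror variant is short; device paths and storage names below are placeholders.)
# create a mirrored pool on the two NVMe drives, 4K-aligned
zpool create -o ashift=12 nvmepool mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2
# register it in PVE as VM/container storage (zvol-backed disk images)
pvesm add zfspool nvme-storage --pool nvmepool --content images,rootdir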
Hello,
Back in the day I used LVM for storage. A couple of PVE hosts I've set up were nothing special regarding their hardware, and articles about filesystem choice seemed to advise LVM for a variety of reasons.
Now I've got a one-year-old ProLiant DL360 Gen10 with two...
I have some filesystem questions regarding PVE, ZFS and how they interact with VMs.
The storage system I'm trying to achieve consists of 5 physical disks:
Physical Disk    | Purpose                    | RAID
Disk 1 + Disk 2  | PVE + mirror               | ZFS mirror
Disk 3 + Disk 4  | VMs and container storage  | a mix of striping and...
Hi everyone,
In my Proxmox I've created a storage directory to hold all kinds of data and made it a ZFS dataset of its own. I want to be able to configure individual auto-snapshot retention policies this way. Everything works fine: I'm able to create VMs etc. in that dataset, and the web UI...
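(Not from the original post, but as an illustration: with the zfs-auto-snapshot package, an assumption here, per-dataset retention is controlled via dataset properties; other tools such as sanoid use a config file instead. The dataset name is a placeholder.)
# enable automatic snapshots only for this dataset
zfs set com.sun:auto-snapshot=true rpool/data/storage
# opt out of the frequent (15-minute) snapshots while keeping daily/weekly ones
zfs set com.sun:auto-snapshot:frequent=false rpool/data/storage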
Hello everyone,
We run Proxmox on a Thomas Krenn server with the specs below.
We currently use 4 GitLab runners and would like to expand. The I/O operations for build/testing/deployment workloads seem somewhat slow to us.
pveperf on the only ZFS pool...
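(Not in the original post, but for producing comparable numbers, this is what is typically checked first on such a pool; pool name and path are placeholders.)
# FSYNCS/SECOND is the figure that matters most for CI-style workloads
pveperf /rpool
# pool settings that commonly influence that figure
zfs get sync,compression,atime,recordsize rpool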
For years I've used VMware ESXi + HW RAID or vSAN, typically with Dell OpenManage / iDRAC to manage HDDs. Well... Broadcom happened.
I'm trying to completely wrap my head around JBOD / HBA ZFS before I even consider using it.
Like fully understanding SCSI ID locations, disk serial numbers...
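(Not from the original post, but the mapping between physical slots, serial numbers, and Linux device names can be inspected with standard tools, roughly like this.)
# stable names that encode model + serial; these are what zpool create should use
ls -l /dev/disk/by-id/
# one line per disk with model, serial, and WWN
lsblk -o NAME,MODEL,SERIAL,WWN,SIZE
# per-disk identity details, useful for matching a failing drive to a bay
smartctl -i /dev/sda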
zpool status
root@jkbs-lab:~# zpool status
pool: pool1
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Sep 23...
Hello,
I'm still learning Proxmox with ZFS and some details are not clear to me. I hope someone can help me understand them better.
My Proxmox is installed with 32 GB RAM and a single 1 TB SSD using ZFS.
To learn, I installed a few VMs and a CasaOS container with an FTP server using...
Hello,
I currently have three Dell PowerStore 1200T storage arrays managed by a Metro Cluster, which I use for iSCSI virtualization (VMware, to be transparent, and everything works fine).
I want to test Proxmox, so I used 3 physical servers to build a cluster of 3 hosts and I...