So I have a few systems built out of cheap hardware, and I'm trying to get as much performance out of them as I can. Right now I'm getting around 100 MB/s write and 25 MB/s read. Each system has different SSDs, but fdisk reports a 512-byte sector size for all of them.
Disk /dev/sda: 465.76...
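One common culprit worth ruling out: many cheap SSDs report 512-byte logical sectors while using 4K physical sectors internally, and a pool created with ashift=9 on such a drive can cripple performance. A quick sanity check along those lines ("tank" is a placeholder pool name):

# logical vs. physical sector size as the kernel sees it
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size

# what the pool was actually created with (9 = 512 B, 12 = 4K)
zpool get ashift tank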
I run a ZFS health check, and this morning I got this message:
The number of I/O errors associated with a ZFS device exceeded acceptable levels. ZFS has marked the device as faulted.
impact: Fault tolerance of the pool may be compromised.
eid: 154
class: statechange
state: FAULTED...
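The usual next step with a FAULTED device is to identify it and decide between clearing the error and replacing the disk. A rough sketch; pool and device names here are placeholders:

zpool status -v        # identify the faulted device and its read/write/cksum counters
smartctl -a /dev/sdX   # check the drive's own error log

# if the errors were transient (loose cable, power blip) and the drive tests clean:
zpool clear tank sdX

# if the drive is genuinely dying, replace it and let it resilver:
zpool replace tank sdX /dev/disk/by-id/NEW-DISK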
Hi again,
I am still tinkering with my project of running a TrueNAS VM on PVE 8.1.4, using an HP DL380 G10 with a PCIe/NVMe/U.2 riser card and cage. I already posted here without much success before getting some more info at the TrueNAS forums over here.
It turned out that some people there would...
I screwed up big time and lost my ZFS pool! How can I recover it? I need help!
I've been using PVE v8 for a few months at home, mainly for Windows & macOS VMs and for testing new systems, including Plex & TrueNAS Core.
It was going great, and I recently added Proxmox Backup Server in order to back up the...
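If the pool has merely disappeared rather than been destroyed, an import scan is the first thing to try. A sketch, with "tank" as a placeholder pool name:

zpool import                      # scan attached devices for importable pools
zpool import -d /dev/disk/by-id   # point the scan at a specific device directory
zpool import -D                   # also list destroyed pools that may be recoverable

# last resort: rewind to an earlier transaction group
zpool import -F tank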
I want to use two U.2 PCIe NVMe SSDs as the fast storage for the disk images of tens of VMs on one PVE host. In order to minimize the risk of data loss due to disk corruption, I would like to use RAID 1.
Then, what is the best filesystem for RAID 1?
ZFS with RAID 1,
or
F2FS on MDRAID with RAID 1?
I...
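For comparison, the ZFS route is two commands on PVE. Device paths and the storage ID below are placeholders, and -o ashift=12 assumes 4K-sector NVMe:

# mirror the two U.2 drives into one pool
zpool create -o ashift=12 nvmepool mirror \
    /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2

# register it with Proxmox as zvol-backed VM storage, thin-provisioned
pvesm add zfspool nvme-vm -pool nvmepool -content images,rootdir -sparse 1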
Hello,
back in the day I used to use LVM for storage. A couple of PVE hosts I set up were nothing special hardware-wise, and the articles on filesystem choice seemed to advise LVM for a variety of reasons.
Now I've got a one-year-old ProLiant DL360 Gen10 with two...
I have some filesystem questions regarding PVE, ZFS and how they interact with VMs.
The storage system I'm trying to achieve consists of 5 physical disks:
Physical Disk      Purpose                      RAID
Disk 1 + Disk 2    PVE + mirror                 ZFS mirror
Disk 3 + Disk 4    VMs and container storage    a mix of striping and...
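For the VM/container pool (Disk 3 + Disk 4), a mirrored data pool plus PVE storage registration might look like this; pool, storage, and device names are placeholders:

zpool create -o ashift=12 vmpool mirror \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

pvesm add zfspool vmdata -pool vmpool -content images,rootdir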
Hi everyone,
in my Proxmox I've created a storage directory to store all kinds of data and made it a ZFS dataset of its own; that way I can configure individual auto-snapshot retention policies per dataset. Everything works fine: I'm able to create VMs etc. in that dataset, and the web UI...
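For what it's worth, tools like zfs-auto-snapshot key off per-dataset properties, so individual policies can be expressed per dataset. A sketch, assuming zfs-auto-snapshot is the snapshot tool in use and "tank/storage" is a placeholder dataset:

# opt the whole pool out, then opt this one dataset back in
zfs set com.sun:auto-snapshot=false tank
zfs set com.sun:auto-snapshot=true tank/storage

# per-interval override, e.g. no hourly snapshots for this dataset
zfs set com.sun:auto-snapshot:hourly=false tank/storage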
Hello everyone,
We run Proxmox on a server from Thomas Krenn with the specs below.
We are currently running 4 GitLab runners and would like to scale up. The I/O operations under build/testing/deployment workloads seem a bit slow to us.
pveperf on the only ZFS pool...
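The FSYNCS/SECOND line from pveperf is the figure that matters most for CI workloads full of small synchronous writes. Data points worth collecting while the runners are busy ("/rpool" is a placeholder mountpoint):

pveperf /rpool      # FSYNCS/SECOND is the key figure for sync-heavy workloads
zpool iostat -v 5   # per-vdev ops and bandwidth under load
zfs get sync,recordsize,compression rpool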
For years I've used VMware ESXi + HW RAID or vSAN, typically with Dell OpenManage / iDRAC to manage HDDs. Well... Broadcom happened.
I'm trying to completely wrap my head around JBOD / HBA ZFS before I even consider using it.
Like fully understanding SCSI ID locations, disk serial numbers...
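On the disk-identification side, the common practice is to build pools from /dev/disk/by-id paths, which embed model and serial number, so a dead disk can be matched to a physical bay. A sketch of the usual lookups (sdX is a placeholder; ledctl comes from the ledmon package and needs enclosure support):

ls -l /dev/disk/by-id/            # stable names that include model + serial
lsblk -o NAME,SERIAL,MODEL,SIZE   # map kernel names to serial numbers
zpool status -v                   # shows the by-id names if the pool was built with them

# blink a drive's locate LED to find it in the chassis
ledctl locate=/dev/sdX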
zpool status
root@jkbs-lab:~# zpool status
  pool: pool1
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Sep 23...
Hello,
I'm still learning Proxmox with ZFS, and some details are not clear to me. I hope someone can help me understand them better.
My Proxmox host has 32 GB of RAM and a single 1 TB SSD using ZFS.
To learn, I installed a few VMs and a CasaOS container with an FTP server using...
Hello,
I currently have three Dell PowerStore 1200T storage arrays managed by a Metro Cluster, which I use for iSCSI virtualization (VMware, to be transparent, and everything works fine).
I want to test Proxmox, and I used 3 physical servers to build a cluster of 3 hosts, and I...
Hello,
today I upgraded one PVE node in my cluster to the latest version, ~8.0.3 to 8.2.3.
I have disabled x2APIC in the BIOS.
Now I'm getting the following error and can't get any further.
Does anyone have a tip for me?
Thanks
Kernel: 6.8.8-4
Error:
Timed out for waiting the udev queue...
Hi, I'm migrating some large LXC containers from one ZFS array to another, and I'm noticing some odd behavior. It seems that for some reason, whatever system handles LXC migrations is writing the data in 1 kB chunks... I mean, credit to ZFS for managing nearly 140k 1 kB IOPS on spinning rust...
My initial research is pointing at padding overhead (and it is possible thin provisioning didn't get enabled at creation), but this also seems wildly off compared to other examples.
The drive called "zfs" is made up of six 1.96 TB SSDs in a RAIDZ. When I go to the summary for the zfs...
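If padding overhead is the culprit, it usually traces back to the zvol volblocksize interacting with RAIDZ parity at ashift=12. Things worth checking (pool name "zfs" as in the post; the zvol name is a placeholder in typical PVE naming):

zpool get ashift zfs      # 12 means 4K allocation units
zfs get volblocksize,refreservation zfs/vm-100-disk-0
zfs list -o space zfs     # split of USED into data, children, refreservation

# thin-provision after the fact by dropping the reservation
zfs set refreservation=none zfs/vm-100-disk-0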
I have inherited an LSI SAS9200-8e 6Gb/s SAS PCIe x8 HBA in a Dell R710 server, connected to an EMC KTN-STL3 JBOD with 15 × 1 TB drives.
I don't see the drives in Proxmox or in the Debian console. Can someone point me to a guide that explains setting up the server BIOS, the HBA's BIOS and/or...
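Before going into the BIOS, it's worth confirming the kernel even sees the HBA and that it's flashed in IT mode. Hedged checks for a SAS2008-family card like the 9200-8e:

lspci | grep -i lsi   # is the controller on the PCIe bus at all?
dmesg | grep -i mpt   # SAS2008 cards bind to the mpt2sas/mpt3sas driver
lsscsi                # are any disks enumerated behind the HBA?

# if the LSI utility is installed, list adapters and firmware mode (IT vs. IR)
sas2flash -listall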
Hello folks,
I just installed Proxmox VE 8.2.4 on 2 machines and set them up in a cluster:
Machine 1: EXT4 configuration - Mini PC: 4 x Intel(R) Celeron(R) J4125 CPU @ 2.00GHz (1 socket) | 8 GB RAM | 240 GB M.2 SSD + 480 GB SATA3 SSD in EXT4 configuration
Machine 2: ZFS-RAID1 - Mini PC: 8 x Intel(R)...
I have an encrypted dataset which contains resources Proxmox needs (e.g. VM storage).
The passphrase is in /etc/zfs/datasetname.phrase, and that path is stored in the dataset's zfs keylocation property.
It gets properly mounted when I run zfs mount -a -l, without me needing to enter the passphrase.
This is not the boot...
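Assuming the goal is to have the key loaded automatically at boot, one common pattern is a small systemd unit ordered before zfs-mount.service. This is a sketch, not an official PVE mechanism, and the unit name is made up:

# /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load ZFS encryption keys from keylocation
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service

followed by systemctl enable zfs-load-key.service.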
Hello!
On PVE 8.2.4 I want to create one more VM disk. The GUI says my pool is full (see screenshot).
But a quick zfs list reports that 47 GB should still be free. That also matches my rough calculation of the sizes shown in the GUI (see...
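A discrepancy like this is often a refreservation on thick-provisioned zvols: the pool view counts reserved space as used while raw dataset sizes suggest room is left. Hedged checks, with "rpool" as a placeholder pool name:

zfs list -o space -r rpool   # AVAIL plus the split of USED per dataset
zpool list -o name,size,alloc,free rpool
zfs get -r refreservation rpool | grep -v none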