Hello, I'm new to the forum and new to using Proxmox. I have a problem: from time to time, automatic backups are created with an "@--head--" mark (what is that? :) ). They take up space, and I can only delete them from the terminal, but after some time they reappear. Could someone tell me...
Hi, I am running Proxmox Backup Server 2.4-3 in a VM on a TrueNAS Server.
So far I had a local path from TrueNAS SCALE connected to the PBS via NFS, mounted as /nfs/pbx. For performance reasons, I would like to switch to ZFS using a zvol provided by TrueNAS, which I have now added as an...
As I've been working on optimizing my Proxmox setup, I found myself wondering about the expected "Timing buffered disk reads" for NVMe and SSD drives in this environment.
I've been using the hdparm -tT command to measure the read speeds of my storage devices, including both NVMe SSDs and SATA...
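In case it helps frame the numbers: this is the kind of invocation meant above, with an example device path, plus a one-liner for pulling the MB/sec figure out of a sample output line for logging repeated runs:

```shell
# Run as root; the device path is an example, substitute your own drive:
#   hdparm -tT /dev/nvme0n1
# A typical "buffered" result line looks like the sample below. The awk
# filter prints just the MB/sec figure (second-to-last field).
sample=' Timing buffered disk reads: 1024 MB in  3.00 seconds = 341.33 MB/sec'
echo "$sample" | awk '/buffered disk reads/ {print $(NF-1)}'   # prints 341.33
```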
I have a setup that I believe should be capable of much higher read speeds than what I am currently achieving, and I'm hoping the community can help me troubleshoot and get the most out of my hardware.
Here are the details of my setup:
HP ML350 Gen10
CPU: Xeon 3106 1.7GHz (single...
Today, I made some changes to my system. Previously, I had two 250GB SSDs directly connected to the motherboard as boot drives and configured them into a RAID1 with Btrfs.
I also had two 1TB HDDs connected to the motherboard, serving as a ZFS RAID1 for non-essential data (not for...
There is an ongoing discussion about slow encrypted ZFS performance, e.g. here: https://github.com/openzfs/zfs/issues/15245 and here: https://github.com/openzfs/zfs/issues/15276
Obviously this is due to a regression introduced in kernel 5.15.0-82 and discussed here...
We have a RAID 10 server with 4x 2TB HDDs (spinning disks). The installation runs well with MDRaid 10 and LVM-Thin/Ext4. Overall, it's a static file server that's part of our CDN infrastructure.
The question now is whether ZFS can take the place of MDRaid without compromising...
Hi guys, we have a small server at home running Proxmox 7.4-16 with root on a mirrored ZFS pool with two SSDs.
One SSD died, resulting in a degraded zpool; the server is still running and can boot.
I've read the Proxmox ZFS documentation about how to restore the mirror, but we have some...
I'm testing some features on PVE 8.0 in my homelab, and I made a simple but not impossible mistake.
I was capping the ZFS ARC at 4GB in /etc/modprobe.d/zfs.conf with the argument:
options zfs zfs_arc_max=4294967296
and I ran the command:
root@pve:/# zpool status -v
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
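(For reference, 4294967296 is exactly 4 GiB expressed in bytes, which a quick shell check confirms:)

```shell
# zfs_arc_max is specified in bytes; 4 GiB works out to:
echo $((4 * 1024 * 1024 * 1024))   # prints 4294967296
```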
I just got a Dell OptiPlex 7050 with a 7th gen Intel i5 and a 500 GB SATA HDD.
I will add 4x16 GB RAM and a 2TB SSD.
I plan to use this for some Docker/LXC for development/testing, plus VMs for Plex, a GitHub runner, and various Unix and Windows VMs (on demand for development and testing). Though...
PBS documentation about backup verification says:
IMHO it would be a better idea to do a "zpool scrub" on the datastore on a regular basis instead of re-verification of backups, wouldn't it? ZFS has everything included to protect your data (and even fix it again, when on a redundant array)...
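For anyone following along, the scrub being discussed is a pool-level operation; a minimal sketch, with the pool name as an assumption:

```shell
# Start a scrub, which reads and verifies the checksum of every
# allocated block in the pool (pool name "rpool" is an example):
zpool scrub rpool
zpool status rpool   # reports scrub progress and any repaired errors
```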
I have the storage on my Proxmox host using ZFS (I'm not even sure if that's a good idea? It's a 2TB M.2 SSD, mostly for Plex)
-- should I use another filesystem?
On the host I can access the files at /Storage using SSH.
I have Plex running in one LXC, connected with "pct set 100 --mp0...
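A hedged sketch of the bind-mount form that command is quoting; the container-side mount path is an illustrative assumption, not the poster's exact value:

```shell
# Bind-mount the host directory /Storage into LXC container 100
# (the mp= target inside the container is an example):
pct set 100 --mp0 /Storage,mp=/mnt/storage
```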
I think everything runs smoothly, but I would like confirmation that it is really OK. The setup has flash devices (Samsung and Kioxia NVMe) running directly on PCIe (i.e. without any kind of RAID controller, etc., although a RAID controller setup + NVMe seems to be a rare case...
Hi, I didn't find a lot of information on the exact practice of how to do it.
I have a server with only 6 SSD slots (2 for the boot disks, 4 for the storage disks).
I have 4x 1TB SSDs inside a RAID 10 ZFS pool, and I want to upgrade those to 2TB, if possible without downtime.
My current procedure...
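One common approach for this kind of in-place upgrade, sketched with the pool name and device IDs as assumptions (the pool stays online throughout):

```shell
# Replace one 1TB member at a time; let each resilver finish before
# swapping the next disk. Pool name "tank" and device paths are examples.
zpool set autoexpand=on tank
zpool replace tank /dev/disk/by-id/old-1tb-a /dev/disk/by-id/new-2tb-a
zpool status tank   # wait for the resilver to complete
# Repeat for the remaining disks; the extra capacity becomes usable
# once every member of each mirror vdev has been replaced.
```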
I'm really wondering why the Proxmox Installation Wizard doesn't offer an option for ZFS without any RAID(Z) setup.
Some benefits of ZFS rely heavily on RAID(Z), but many don't: Copy-on-write, integrity checksums, snapshot features, deduplication, native block-level encryption, etc...
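To make the point concrete, a single-disk pool sketch (pool and device names are examples) still exposes those features:

```shell
# A pool on one disk: no redundancy, but CoW, checksums, snapshots,
# compression, and native encryption all still work.
zpool create -o ashift=12 tank /dev/disk/by-id/example-ssd
zfs set compression=zstd tank
zfs snapshot tank@clean-install
```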
I'm relatively new to ZFS. I've read and searched for a direct answer, but got very confused by the ZFS storage types in Proxmox and don't know how to proceed.
I have the following drives and datastores to create, and would like your opinions and advice on the ZFS storage...
So this is what I'd like to do, and I can't quite figure it out. I am sure I am missing something simple. I have a fairly large cluster, so this has a lot to do with migration etc.
1. I'd like to install on an M.2 drive. This is simple, and I have accomplished this easily.
2. I am installing on...
I have OpenMediaVault running inside Proxmox, and passed 4 HDDs through to OMV.
The disks are: 4x WD6003FFBX-68MU3N0 (WD Red Pro (2020), 256MB cache, 6TB)
Specs: 4 cores (host) / 8GB RAM / all the HDDs are passed through with qm set.
Once I installed everything and installed the ZFS...