I have a situation where things got out of control on a standalone PVE system with a 500 GB SSD as ZFS storage. I need to somehow pull the drive and clone it to a larger 1 TB drive for the time being as a stopgap measure in a time-critical situation. How can this be done so that the size is scaled...
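(Not from the thread, but one common route is to attach the larger disk as a temporary mirror, resilver, detach the old disk, and let the pool grow. A rough sketch only: the pool name "rpool" and the device names /dev/sda2 (old) and /dev/sdb (new) are assumptions, and on a boot pool you would additionally have to set up the bootloader on the new disk before detaching the old one.)

```shell
# Attach the 1 TB disk as a mirror of the existing 500 GB device:
zpool attach rpool /dev/sda2 /dev/sdb

# Wait for the resilver to complete:
zpool status rpool

# Detach the old 500 GB device:
zpool detach rpool /dev/sda2

# Let the pool expand into the larger disk:
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdb
```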
I'm about to replace the HBA card used for my hdd_pool, so I want to temporarily move the ZVOL virtual drives of my VMs from the hdd_pool to the ssd_pool, just to be safe.
I tested with one of the smaller virtual drives that's set to look like a 32G drive to the guest OS and has 7.52G actually...
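(For reference, the Proxmox-native way is "Move disk" in the GUI or `qm move-disk` on the CLI; a ZFS-level send/receive also works. The VM ID 101, disk slot scsi0, and dataset names below are made-up examples.)

```shell
# Option 1: let Proxmox move it and update the VM config for you:
qm move-disk 101 scsi0 ssd_pool

# Option 2: ZFS send/receive - copies only allocated blocks, so a
# zvol presented as 32G but holding ~7.5G transfers ~7.5G:
zfs snapshot hdd_pool/vm-101-disk-0@move
zfs send hdd_pool/vm-101-disk-0@move | zfs recv ssd_pool/vm-101-disk-0
```

With option 2 you have to point the VM config at the new storage yourself; option 1 does that automatically.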
Hey all, hopefully a quick answer. I am looking to get 3 SSDs into raidz0; this will be a VM pool to reduce the ridiculous I/O delay I have now. I do have a solid redundancy strategy and backup system already in place. Would someone be able to point me to the CLI commands or GUI setup to make...
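(A minimal CLI sketch, not from the thread. Note that ZFS has no "raidz0" level; a pool without redundancy is just a plain stripe, which is what listing the disks with no vdev keyword creates. The pool name and device paths below are placeholders; for real disks, /dev/disk/by-id/ paths are safer than /dev/sdX.)

```shell
# Create a striped pool (no redundancy) across three SSDs:
zpool create -o ashift=12 ssd_vm_pool /dev/sdb /dev/sdc /dev/sdd

# Register it with Proxmox as VM/container storage:
pvesm add zfspool ssd_vm_pool -pool ssd_vm_pool -content images,rootdir
```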
When it comes to storage, we have been using ZFS over iSCSI in our clusters for years.
Now for a couple of new projects, we require S3 compatible storage and I am unsure about the best way to handle this situation. I am tempted to use MinIO, but I've read mixed reviews about it and Ceph seems...
I currently have Proxmox installed on a zfs single disk, and want to move it to a zfs mirror. The tricky part is that the disk it is on now will be one of the disks in the mirror. How would I move the install to a temporary location, create the mirror with the two disks, and then move the...
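(If the second disk is at least as large as the first, you may not need a temporary location at all: `zpool attach` converts a single-disk vdev into a mirror in place. A sketch assuming the default PVE layout (ZFS on partition 3, ESP on partition 2), with the existing disk as /dev/sda and the new one as /dev/sdb; verify your actual partition layout before running any of this.)

```shell
# Copy the partition table to the new disk and randomize its GUIDs:
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Attach the new disk's ZFS partition, turning the vdev into a mirror:
zpool attach rpool /dev/sda3 /dev/sdb3
zpool status rpool   # wait for the resilver to finish

# Make the second disk bootable as well:
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
```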
I've got a PVE box set up with two ZFS pools:
root@pve:~# zpool status -v ONE_Pool
scan: scrub in progress since Tue Nov 29 11:48:09 2022
194G scanned at 6.91G/s, 2.67M issued at 97.7K/s, 948G total
0B repaired, 0.00% done, no estimated completion time...
I have a very specific and strange problem, and it's kind of urgent to solve it in a timely manner. Yesterday I encountered a kernel panic on my Proxmox server.
Looking at the server management logs, apparently there was a power outage and the UPS was out of service for maintenance.
booting to the default...
I am new to proxmox and LXCs.
One of my LXCs suddenly won't boot anymore. Three others are working fine.
I did some research on my own and found two possibilities: ZFS problems or a kernel update.
But I'm using ext4, so maybe it's the problem of an "apt full-upgrade"?
The error message...
I have an unusual problem with the ZFS virtual machine block size (zvol).
Proxmox version 7.3
I have a configured Proxmox cluster consisting of two servers plus a QDevice.
I set up a ZFS datastore from the Proxmox web GUI called SSD_ZPOOl_1 and set its block size to...
I have an environment with PVE 7.3, where I have disks in ZFS.
I mounted it in PVE as a directory, since I currently use qcow2!
However, I have always used the qcow2 format for the ease of snapshots.
The question is.
Do I lose a lot of performance using qcow2 on zfs storage?
What is the right way...
I'm currently running proxmox on two 8TB drives, using zfs, and they're mirrored. My `zpool status` looks like this:
scan: scrub repaired 0B in 1 days 03:13:29 with 0 errors on Mon Nov 14 03:37:30 2022
So the output of df -h on my PVE shows, among other things:
rpool/ROOT/pve-1 1.4T 8.5G 1.3T 1% /
So does that mean the ROOT area is 1.4 TB in size? Can that be changed?
I haven't dealt with ZFS before, so I don't quite understand how the pool here...
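(For context: the "size" df reports for a ZFS dataset is its used space plus the pool's currently free space, not a fixed partition; all datasets in rpool draw from the same free space, so the 1.4T shrinks as other datasets grow. To inspect the distribution, or cap the root dataset, something like this; the quota value is only an example.)

```shell
# See how space is distributed across the pool's datasets:
zfs list -o name,used,avail rpool

# Optionally cap the root dataset with a quota (example value):
zfs set refquota=100G rpool/ROOT/pve-1
```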
I'm sure this question has been asked before, but I can't decide what would be best: more RAM, or an additional L2ARC, and in that case which type?
Proxmox is currently running on an HP DL380 G9 with one PCIe slot available, so I could get a card that supports NVMe and add around 250 GB of fast...
I am running proxmox 7.2.11
zfs-zed/stable,now 2.1.6-pve1 amd64
zfs-initramfs/stable,now 2.1.6-pve1 all
zfsutils-linux/stable,now 2.1.6-pve1 amd64
After taking the only disk out of a hot-swap bay in a single-disk pool,
zpool status still listed the pool as online, including that disk...
Hi everyone, I am hurting for memory at the moment and I need a temporary fix until my Epyc parts arrive in December. I posted a question regarding ZFS memory usage and you guys pointed me to the right place to change how much memory the host system uses for ZFS. I have that ZFS volume shared to...
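(For anyone finding this later, the usual knob is the `zfs_arc_max` module parameter; the 4 GiB value below is just an example to adapt to your RAM situation.)

```shell
# Apply immediately (value in bytes, here 4 GiB):
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots:
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```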
I'm attempting to migrate one of my VMs to an LXC container.
The VM has a ~8TB ZVOL disk attached to it, and I want to move all the data on there into the LXC container
I'd like to avoid using SMB/NFS/iSCSI to do it, instead I'd like to mount the ZVOL to the host or the LXC and copy the...
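(In case it's useful: zvols already appear as block devices on the host under /dev/zvol/&lt;pool&gt;/&lt;name&gt;, with partitions exposed as -partN, so they can be mounted directly with the VM shut down. All pool, zvol, and container dataset names below are assumptions.)

```shell
# Find the zvol and mount its first partition read-only on the host:
ls /dev/zvol/hdd_pool/
mkdir -p /mnt/vmdata
mount -o ro /dev/zvol/hdd_pool/vm-100-disk-0-part1 /mnt/vmdata

# Copy the data into the container's dataset (subvol name assumed):
rsync -aHAX /mnt/vmdata/ /hdd_pool/subvol-200-disk-0/data/
```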
Hi everyone, I am new to Proxmox and I have been searching around on how to solve this issue. I have learned that ZFS requires ECC memory, I have been using ZFS on Ubuntu server for a year now with standard DDR4. I ordered an Epyc 7402 and Supermicro H12SSL-i, I will buy ECC memory now that I...
I'm currently doing some tests with Proxmox and TrueNAS (both on dedicated machines connected via 10gbit) and looking at the Proxmox backup function got me thinking:
Would it make any sense to send uncompressed Backups to a TrueNAS volume with ZSTD compression (instead of doing the ZSTD...
I started getting notifications every hour or so from my proxmox machine:
ZFS has finished a scrub:
time: 2022-11-13 00:24:01+0100
status: One or more devices has experienced an error resulting in data
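(The usual triage for this kind of notification, for reference: identify the device and any damaged files first, then clear the counters once the cause is fixed; "yourpool" is a placeholder.)

```shell
# Show the affected device and list any files with permanent errors:
zpool status -v

# After repairing or replacing the device, reset the error counters:
zpool clear yourpool
```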
I read some posts about this topic and I have to admit I am confused now. So excuse my possibly redundant question, but I hope it will become clear to me.
Is there a way to embed the host's ZFS more effectively in a guest VM, rather than creating a drive in the guest and (for example) installing...