My current NAS setup has Proxmox VE running on bare metal and TrueNAS Scale running in a VM.
I created a ZFS pool "appspool" from the UI: Datacenter -> Storage -> Add -> ZFS.
I then created a TrueNAS Scale VM and passed the disk through with qm set 900 --scsi2...
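For reference, a full-disk passthrough of this kind typically uses the stable by-id path rather than /dev/sdX; the path below is a placeholder for whatever ls -l /dev/disk/by-id shows for the disk:

qm set 900 --scsi2 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL   # placeholder by-id path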
Hello,
I have an old-ish HP ProLiant MicroServer Gen8. It has 4x 3.5" bays (currently populated with 2x WD Reds = /dev/sda, /dev/sdb) plus 1x old laptop 2.5" HDD where I installed the Proxmox OS only (= /dev/sdc).
My setup is that I have Proxmox installed on the single 2.5" HDD, where the whole disk was...
Hello gents, I would like to ask you for advice after hours of googling without success.
After restarting my home server while playing with a VPN, my one-month-old SSD, which I am using as a NAS (I know: one disk, NAS, ZFS... who does this), stopped working.
I was unable to access the storage, so...
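A first-pass check worth trying in this situation, assuming the disk still shows up in lsblk, is to see whether the pool is importable at all, read-only first to protect the data:

zpool import                         # lists pools the system can see
zpool import -o readonly=on <pool>   # import without writing anything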
Hi all,
While working through this problem here, I realized that I needed to fix the layout of the partition in my rpool before proceeding.
/dev/sdb3 and /dev/sda3 are now mirrored partitions (correct), but sda2 should be an EFI boot partition. I'm trying to remove sda2 from the rpool so
I have...
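If sda2 ended up attached as an extra mirror device rather than a top-level vdev, detaching it should be enough; a sketch, using the device name from the post:

zpool detach rpool sda2   # only valid for a member of a mirror vdev
zpool status rpool        # confirm the resulting layout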
Hi,
What is the correct way to set up local ZFS pools on each node? I am not looking to share them/HA across nodes, just want to use a local pool on each node for VMs, and use my shared storage (NFS) for HA across the cluster.
I have the cluster setup, and when I add a zpool to a node, it looks...
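For what it's worth, the usual pattern is to create the pool locally on each node and then register it as a storage restricted to that node; a sketch with hypothetical pool, storage, and device names:

zpool create -o ashift=12 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
pvesm add zfspool local-tank --pool tank --content images,rootdir --nodes pve1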
Hello
I have a simple problem: I have a Windows Server 2019 VM.
Space used on the disk of the VM: 179 GB
Size of the VM disk: 1.65 TB
When I back up the VM with Proxmox, my backup size is 919 GB. Why?
I cannot defrag the disk in the VM.
For information, I use a zpool storage for the disk...
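A likely cause: vzdump reads every block the zvol has allocated, and blocks freed inside Windows stay allocated unless the guest can TRIM them down to ZFS. A sketch of the usual fix, assuming a hypothetical VM 101 with its disk on scsi0:

qm set 101 --scsi0 local-zfs:vm-101-disk-0,discard=on      # illustrative volume name
# then, inside the guest, run: Optimize-Volume -DriveLetter C -ReTrim
zfs get used,referenced,volsize rpool/data/vm-101-disk-0   # check reclaimed space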
Hi People,
What I want to achieve is one central master VM disk that multiple VMs use in read-only mode.
My base VM, say VM 100, has Windows 10 and lots of other software, and my other VMs, VM 101 to VM 105, use the same disk image as the base Windows VM...
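One Proxmox-native way to approximate this is a template plus linked clones, which share the base image copy-on-write instead of literally mounting one disk read-only in several VMs; a sketch using the IDs from the post, with hypothetical clone names:

qm template 100                    # one-way: converts the base VM to a template
qm clone 100 101 --name win10-c1   # linked clone (default on storage that supports it)
qm clone 100 102 --name win10-c2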
Good morning,
here is a little Sunday morning story with a question regarding datastore emulation:
My homelab is growing, while I am evaluating options to shrink it down for power-consumption reasons. Growth is not always linear, and with different new (in this case: used) hardware come new...
I am wondering if it is possible to create a ZFS pool on an mdadm RAID0; I love the performance of the latter and the functionality of the former with Proxmox.
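It does work mechanically, though a sketch is worth a caveat: with a single md device underneath, ZFS can still detect corruption but can no longer self-heal, and a plain ZFS-native stripe usually performs comparably while keeping ZFS in charge of the disks. Device names are illustrative:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
zpool create tank /dev/md0
# the ZFS-native equivalent stripe would be: zpool create tank /dev/sdb /dev/sdc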
Hi All,
Hoping to gain a better understanding of why I'm seeing a difference in performance running the same fio command in different contexts.
A little background: my pool originally contained 4 vdevs, each with 2x 3TB disks. However, due to a failing disk on mirror-3 and wanting to...
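When comparing contexts like this, it helps to pin down one fio invocation and run it unchanged against the raw zvol and inside the guest, so the only variable is the I/O path; an illustrative example (note it destroys data on the test zvol):

fio --name=randwrite --filename=/dev/zvol/tank/testvol --rw=randwrite \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based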
Hello,
I have 2 ZFS pools on my machine (PVE 7.1):
rpool (Mirror SSD)
datapool (Mirror HDD)
Every time I boot up my machine, I get an error: the import of datapool failed.
A look at the syslog shows the same entry every time:
Jan 3 13:31:29 pve systemd[1]: Starting Import ZFS pool datapool...
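A common cause is a stale or missing entry in /etc/zfs/zpool.cache, which makes the per-pool import service race the device scan at boot; a sketch of the usual fix:

zpool set cachefile=/etc/zfs/zpool.cache datapool   # re-register the pool in the cache
update-initramfs -u -k all                          # so the cache lands in the initramfs too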
Hello everyone,
I have a zpool (raidz1) built with 3 x 2TB disks.
Some disks started to report errors, so I've got 3 x 6TB disks and I want to replace the old ones.
I've followed these steps:
:~# zpool status -v 'storagename'
  pool: 'storagename'
 state: ONLINE
status: Some supported...
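For the record, the usual sequence is one replace per disk, waiting for each resilver, then letting the pool grow into the new space; the device paths below are placeholders:

zpool replace storagename /dev/disk/by-id/old-2tb /dev/disk/by-id/new-6tb
zpool status storagename              # wait for the resilver, then repeat per disk
zpool set autoexpand=on storagename   # after all three are swapped
zpool online -e storagename /dev/disk/by-id/new-6tb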
Hi all,
I currently have 4x 1TB NVMe drives running as a striped mirror for my VM disk storage. I am in the process of replacing them all with 2TB drives.
I'm happy with using the zpool replace command. I have replaced two so far, the second one is just re-silvering now. I currently only have...
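Once the last resilver finishes, it is worth checking whether the extra capacity is still pending; EXPANDSZ shows unexpanded space, and online -e grows a device in place (pool and device names are placeholders):

zpool list -o name,size,allocated,free,expandsize
zpool online -e <pool> <device>   # per replaced device, if autoexpand was off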
Hello dear Proxmox forum. I would like to combine my 2x 1TB SATA SSDs in a "RAID0" as a mirror with my 1x 2TB NVMe SSD, so that I have 2TB of SSD storage and either the NVMe or one of the SATA SSDs can fail.
Many thanks in advance for the info.
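ZFS itself has no "mirror of a stripe" vdev, so one workaround (with the usual caveats about layering ZFS on md) is to stripe the two SATA SSDs with mdadm and mirror that device against the NVMe; device names are illustrative:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb   # 2x 1TB -> 2TB
zpool create ssdpool mirror /dev/md0 /dev/nvme0n1                      # mirrored with the NVMe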
I created a Windows Server 2019 VM with a 120GB C: drive. After the VM was created, I added a 60TB drive and formatted it with NTFS. This 60TB drive is a raw file on a ZFS pool. The issue I'm having is that even though in Windows the D: drive shows 11TB used and 49TB free, when I look at the...
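One thing worth ruling out first: a fully provisioned zvol reserves its entire volsize up front, regardless of what Windows has actually written; a sketch with an illustrative dataset name:

zfs get volsize,refreservation,used,logicalused rpool/data/vm-102-disk-1
zfs set refreservation=none rpool/data/vm-102-disk-1   # thin-provision (know the trade-offs)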
Hello,
Urgently need help. I have research data on it :oops:
Story:
My GPU passthrough has been working well on both 6.2.55 and .65. Because I used update-initramfs -u -k all, both .55 and .65 have the GPU framebuffer disabled, and thus no host display output. I was trying to reverse GPU passthrough...
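Without the exact passthrough config it is hard to be specific, but the general reversal is to drop whatever disabled the framebuffer (kernel cmdline entries such as video=efifb:off, and any vfio/blacklist lines in /etc/modprobe.d/), then rebuild and sync the boot setup:

update-initramfs -u -k all
proxmox-boot-tool refresh   # push the updated initramfs/cmdline to the boot partitions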
Hello,
I installed zfs-zed via # apt-get install zfs-zed and edited /etc/zfs/zed.d/zed.rc to uncomment ZED_EMAIL_ADDR="root" as described here: https://pve.proxmox.com/wiki/ZFS_on_Linux#_activate_e_mail_notification
zpool status shows the pool is DEGRADED, but I do not get an e-mail. Other notifications from...
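Two things worth checking, as a sketch: that local mail delivery works at all (this assumes a configured MTA and the mail command from mailutils/bsd-mailx), and that zed is running with the edited config. A mail for a healthy scrub additionally needs ZED_NOTIFY_VERBOSE=1 in zed.rc:

echo test | mail -s "zed test" root   # confirm mail delivery end-to-end
systemctl restart zfs-zed             # pick up the edited zed.rc
zpool scrub <pool>                    # should produce a scrub_finish notification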
Hi,
one of my disks reported a SMART error, so I set out to replace the failed disk - but failed.
Here's what I did, as reported by zpool history, with some remark indented:
2021-04-12.09:48:43 zpool offline rpool sdc2
I removed the failed disk and shut down the system, replacing the drive...
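For anyone following along, the replacement sequence documented for ZFS boot disks roughly looks like this (sdX = a healthy member, sdY = the new disk; partition numbers depend on the install layout, so double-check with lsblk before running anything):

sgdisk /dev/sdX -R /dev/sdY             # copy the partition table to the new disk
sgdisk -G /dev/sdY                      # randomize GUIDs on the copy
zpool replace -f rpool sdc2 /dev/sdY2   # swap in the new ZFS partition
# on systems booting via proxmox-boot-tool, also re-init the new ESP partition:
proxmox-boot-tool format /dev/sdY<esp>
proxmox-boot-tool init /dev/sdY<esp>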
Hi,
I have a ZFS rpool with 4 x 10 TB drives on raidz1-0:
root@pve:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 1 days 04:24:17 with 0 errors on Mon Mar 15 04:48:19 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool...
We have a working four-host PVE 6.3-3 cluster installed with ZFS. All of those machines are enterprise gear in working condition. With these machines we archive, digitize, and produce very large video files in different formats. These are often intensive processes that require a lot of resources and...