Hi,
What is the correct way to set up local ZFS pools on each node? I am not looking to share them or make them HA across nodes; I just want to use a local pool on each node for VMs, and use my shared storage (NFS) for HA across the cluster.
I have the cluster setup, and when I add a zpool to a node, it looks...
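A common approach is to give the pool the same name on every node and add a single `zfspool` storage entry, so one definition covers the whole cluster. A sketch, assuming a pool named `tank` and example device IDs:

```shell
# On each node: create a local pool with the SAME name everywhere
# (device paths are placeholders - use your own /dev/disk/by-id/ entries):
zpool create -o ashift=12 tank mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2

# Once, from any node: add a single storage entry for the cluster.
# Because the pool name matches on every node, the same entry works on all of them:
pvesm add zfspool tank-vm --pool tank --content images,rootdir --sparse 1
```

With identical pool names, VM migration between nodes also works, since Proxmox can allocate the target disk on the destination node's pool of the same name.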
Hello
I have a simple problem: I have a Windows Server 2019 VM.
Space used on the VM's disk: 179 GB
Size of the VM disk: 1.65 TB
When I back up the VM with Proxmox, my backup size is 919 GB. Why?
I cannot defragment the disk in the VM.
For reference, I use a zpool storage for the disk...
Hi People,
What I want to achieve is one central master VM disk that multiple VMs will use in read-only mode.
My base VM, say VM 100, has Windows 10 and lots of other software installed, and my other VMs, VM 101 to VM 105, use the same disk image as the base Windows VM...
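In Proxmox this is usually done with a template plus linked clones rather than literally sharing one writable disk. A sketch, assuming VM 100 is the fully prepared base:

```shell
# Convert the base VM into a read-only template (note: this is one-way):
qm template 100

# Create linked clones that reference the template's disk.
# On ZFS-backed storage these are snapshot clones, so each clone
# initially consumes almost no extra space:
qm clone 100 101 --name win10-clone1
qm clone 100 102 --name win10-clone2
```

`qm clone` creates a linked clone by default when the source is a template and the storage supports it; pass `--full 1` if you want an independent copy instead.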
Good morning,
here is a little Sunday morning story with a question regarding Datastore emulation:
My homelab is growing - while I am evaluating options to shrink it down for power consumption reasons. Growing is not always linear and with different new (in this case: used) hardware come new...
I am wondering if it is possible to create a ZFS pool on top of an mdadm RAID0 array; I love the performance of the latter and the functionality of the former with Proxmox.
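Layering ZFS on mdadm is generally discouraged, since ZFS wants direct access to the disks; a plain multi-disk pool already stripes writes across all top-level vdevs, which gives RAID0-like performance natively. A sketch with placeholder device IDs:

```shell
# A pool made of several single-disk top-level vdevs stripes data
# across all of them (RAID0-equivalent: fast, but NO redundancy,
# and losing any one disk loses the whole pool):
zpool create -o ashift=12 fastpool \
    /dev/disk/by-id/DISK-A \
    /dev/disk/by-id/DISK-B
```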
Hi All,
Hoping to gain a better understanding of why I'm seeing a difference in performance running the same fio command in different contexts.
A little background, my pool originally consisted contained 4 vdevs, each with 2x 3TB. However due to a failing disk on mirror-3 and wanting to...
Hello,
I have 2 ZFS pools on my machine (PVE 7.1):
rpool (mirrored SSDs)
datapool (mirrored HDDs)
Every time I boot the machine I get an error that the import of datapool failed.
A look in the syslog shows the same entry every time:
Jan 3 13:31:29 pve systemd[1]: Starting Import ZFS pool datapool...
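One frequent cause of this error is a stale or missing entry in the ZFS cache file, so the early-boot import races the disks before they are ready. A commonly suggested fix (pool name taken from the post, the rest assumed):

```shell
# Re-register the pool in the cache file consulted at boot:
zpool set cachefile=/etc/zfs/zpool.cache datapool

# Rebuild the initramfs so the updated cache file is picked up:
update-initramfs -u -k all
```

If the error persists, checking `systemctl status zfs-import@datapool.service` after the next boot usually shows whether the import failed because the devices were not yet available.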
Hello everyone,
I have a zpool (raidz1) built with 3 x 2TB disks.
Some disks have started to report errors, so I've taken 3 x 6TB disks and I want to replace the old ones.
I've followed these steps:
:~# zpool status -v 'storagename'
pool: 'storagename'
state: ONLINE
status: Some supported...
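The usual procedure for growing a raidz1 this way is to replace the disks one at a time, waiting for each resilver to finish before touching the next disk. A sketch with placeholder device names (pool name from the post):

```shell
# Repeat once per disk - strictly one at a time:
zpool replace storagename OLD-DISK-ID NEW-DISK-ID
zpool status storagename      # wait here until the resilver completes

# After the LAST resilver, let the pool grow into the new capacity:
zpool set autoexpand=on storagename
zpool online -e storagename NEW-DISK-ID
```

The extra space only becomes visible once every member of the raidz1 vdev is a 6TB disk and expansion has been triggered.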
Hi all,
I currently have 4x 1TB NVMe drives running as a striped mirror for my VM disk storage. I am in the process of replacing them all with 2TB drives.
I'm happy using the zpool replace command. I have replaced two so far; the second one is just resilvering now. I currently only have...
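Once both disks of a mirror vdev have been swapped for 2TB drives, the extra capacity only appears after expansion is enabled. A sketch, with an assumed pool name of `nvmepool`:

```shell
# Allow vdevs to grow automatically once all their members are larger:
zpool set autoexpand=on nvmepool

# Or expand an already-replaced disk explicitly:
zpool online -e nvmepool NEW-DISK-ID

# SIZE should now reflect the larger drives for completed vdevs:
zpool list nvmepool
```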
Hello dear Proxmox forum. I would like to combine my 2x 1TB SATA SSDs in a "RAID0" (stripe), mirrored with my 1x 2TB NVMe SSD, so that I end up with 2TB of SSD storage where either the NVMe or one of the SATA SSDs can fail.
Many thanks in advance for the info.
I created a Windows Server 2019 VM with a 120GB C: drive. After the VM was created I added a 60TB drive and formatted it with NTFS. This 60TB drive is a raw file on a ZFS pool. The issue I'm having is that even though in Windows the D: drive is showing 11TB used and 49TB free when I look at the...
Hello,
Urgently need help. I have research work data on it :oops:
Story:
My GPU passthrough has been working well on both 6.2.55 and 65. Because I used update-initramfs -u -k all, both .55 and .65 have the GPU frame buffer disabled, so there is no host display output. I was trying to reverse GPU passthrough...
Hello,
I installed zfs-zed via # apt-get install zfs-zed and edited /etc/zfs/zed.d/zed.rc to uncomment ZED_EMAIL_ADDR="root" as described here: https://pve.proxmox.com/wiki/ZFS_on_Linux#_activate_e_mail_notification
zpool status is degraded, but I do not get an email. Other notifications from...
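Two things worth checking are that the node can deliver local mail at all, and that zed was restarted after zed.rc was edited. A sketch:

```shell
# Verify local mail delivery works independently of zed:
echo "test body" | mail -s "zed test" root

# Restart the daemon so the edited /etc/zfs/zed.d/zed.rc is re-read:
systemctl restart zfs-zed

# Optionally trigger an event to exercise the notification path:
zpool scrub rpool
```

Note that a scrub which finds no errors only produces a mail when `ZED_NOTIFY_VERBOSE=1` is set in zed.rc; with the default verbosity, uneventful scrubs are silent.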
Hi,
one of my disks reported a SMART error, so I set out to replace the failed disk - but failed.
Here's what I did, as reported by zpool history, with some remarks indented:
2021-04-12.09:48:43 zpool offline rpool sdc2
I removed the failed disk and shut down the system, replacing the drive...
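For a boot pool (rpool) the new disk also needs the partition table and bootloader copied before `zpool replace` will leave a bootable system. A sketch of the procedure from the Proxmox wiki, with placeholder devices (`/dev/sdX` = healthy mirror member, `/dev/sdY` = new disk); partition numbers depend on your install layout:

```shell
# Copy the partition layout from a healthy mirror member,
# then give the new disk fresh GUIDs:
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# Replace the old ZFS partition with the matching new one
# (here partition 2, matching the sdc2 layout from the post):
zpool replace -f rpool sdc2 /dev/sdY2

# Make the new disk bootable too. On older GRUB-only layouts:
grub-install /dev/sdY
```

Newer installs that boot via an ESP use `proxmox-boot-tool format`/`init` on the ESP partition instead of `grub-install`.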
Hi,
I have a ZFS rpool with 4 x 10 TB drives on raidz1-0:
root@pve:~# zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 1 days 04:24:17 with 0 errors on Mon Mar 15 04:48:19 2021
config:
NAME STATE READ WRITE CKSUM
rpool...
We have a working 4-host PVE 6.3-3 cluster installed with ZFS. All those machines are enterprise gear in working condition. With these machines we archive, digitize and produce very large video files in different formats. These are often intensive processes that require a lot of resources and...
Hello,
we already have a backup server with a ZFS backend in use. Until today we have been doing our PVE cluster full backups via SMB to this target, but we want to use PBS in the future to benefit from incremental backups and other features.
I installed PBS on top of our existing VM backup server...
Hey,
Trying to share a zpool between 2 installations to play around with HA. I'm not new to Proxmox but am new to clustering and shared storage between nodes, so any help is appreciated, and any comments on how I could do this cleaner/smarter would be appreciated. The goal is to have the...
I migrated my data from one pool named 'pool1' to another pool named 'pool2' via zfs send ... | zfs recv ... It all worked well: the data was transferred, I destroyed 'pool1' afterwards and physically removed its disks from the system.
zpool status gives me a normal output:
zpool list...
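For reference, the migration as described corresponds to something like the following (snapshot name assumed):

```shell
# Snapshot everything recursively, then replicate to the new pool:
zfs snapshot -r pool1@migrate
zfs send -R pool1@migrate | zfs recv -F pool2

# Only after verifying the data landed on pool2:
zpool destroy pool1
```

If anything still references the old pool after this (storage entries, `zpool import` listings, boot-time cache entries), it is usually leftover configuration rather than leftover data.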
Hi,
I am quite new to Proxmox and can't find the best solution for my installation.
I'm running FreeNAS today on an old server with 3x RAIDZ-1 pools, which I will import into Proxmox.
My server is Supermicro X9DRi-F, 2x E5-2620 v2 128GB RAM.
I will import 2x 8x2TB RAIDZ-1 and 1x 4TB RAIDZ-1
I have...