I have a 12-slot Supermicro chassis and 12 x 1TB Samsung PRO SSDs.
I'm configuring them as a ZFS pool; which layout is the better way to go?
Two striped 6-drive RAIDZ2 vdevs, usable capacity of 8 drives
One 11-drive RAIDZ3 vdev, usable capacity of 8 drives
Also I have an Intel P3700 PCI...
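For reference, here is roughly how each option would be created (tank and the sdX names are placeholders; in practice /dev/disk/by-id paths are safer):

Option 1, two striped RAIDZ2 vdevs:
#zpool create tank raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl

Option 2, one 11-drive RAIDZ3 vdev, keeping the 12th slot as a hot spare:
#zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk spare sdl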
Could anybody please write down how to do that with the pve-zsync command? I couldn't figure it out from the man page or the wiki info about pve-zsync.
My VM ID is 766, and below are the pool names with disks:
pool-0-1-35 1.17T 9.37T 96K /pool-0-1-35
pool-0-1-35/vmvols...
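For context, my best reading of the syntax so far is something like the following (the destination host and pool are placeholders, so please correct me if this is wrong):

One-off sync of VM 766 to a remote pool:
#pve-zsync sync --source 766 --dest 192.168.1.2:backup --verbose

Recurring cron job every 15 minutes, keeping 7 snapshots:
#pve-zsync create --source 766 --dest 192.168.1.2:backup --interval 15 --maxsnap 7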
I have two zpools, and one of my VMs resides on both: its first disk is on the local ZFS pool and the other disk is on the second ZFS pool. I wonder how pve-zsync handles this, since I cannot specify two destination pools on the command line. Is there an undocumented way to do that, or is it simply...
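The workaround I'm considering, assuming pve-zsync accepts a dataset path as --source the way its usage text suggests, is one job per disk (the dataset and destination names here are illustrative):

#pve-zsync create --source pool-0-1-35/vmvols/vm-766-disk-1 --dest 192.168.1.2:backup --interval 15
#pve-zsync create --source pool2/vm-766-disk-2 --dest 192.168.1.2:backup2 --interval 15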
Maybe there's also a problem with snapshotted ZFS volumes. I had a pool with a size of 14TB, and zpool list shows it as below.
I had a snapshot created by pve-zsync; it is counted on top of my usage and increases the usage shown in the graph view.
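To see how much of the reported usage is snapshot data rather than live data, a per-dataset breakdown helps (the pool name is mine):

#zfs list -o name,used,usedbysnapshots,usedbydataset -r pool-0-1-35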
You should really mention that prerequisite on the wiki page https://pve.proxmox.com/wiki/Storage_Replication; otherwise, it takes time to understand and get replication running.
Yes, it worked that way. But after a reboot, the permissions revert, the OSD doesn't come up, and Ceph doesn't start. I think that's a udev issue that I've seen in your older posts, but I can't fix that either.
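What I'm experimenting with now is tagging the journal partitions with the Ceph journal GPT typecode, so that Ceph's udev rules set the ceph:ceph ownership at boot; the device and partition numbers below are examples from my setup:

#sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
#sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
#partprobe /dev/nvme0n1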
Thank you for the reply. This is exactly what I did for my installation. Before executing the command:
#ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0 root default
-2 0 host prox-ceph-5-1-51
#ceph-disk zap /dev/sda
The operation...
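For completeness, the recreate step after zapping should be something like this, if I understand the stock Proxmox tooling correctly (the journal device is an example):

#pveceph createosd /dev/sda -journal_dev /dev/nvme0n1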
I have an Intel DC P3700 NVMe drive that I want to use as the journal for 3 SATA OSDs. I partitioned the device, but I cannot use the partitions as block devices, so I want to use them as file journals.
I couldn't find a way to do this in the Proxmox interface; ceph-disk prepares the OSD, but I cannot see it in the...
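What I tried on the CLI was along these lines (the data disk and journal partition are examples from my setup):

#ceph-disk prepare /dev/sdb /dev/nvme0n1p1
#ceph-disk activate /dev/sdb1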
Maybe someone can benefit from this knowledge. We have two identical systems, both with an NVMe disk issue: after booting, the NVMe drive shows up in dmesg, in /dev/, and in fdisk -l, but the device disconnects about 60 seconds after being probed in the boot sequence. Applying the above kernel...
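For anyone wondering how to apply such a kernel parameter on Proxmox, the mechanism is the usual GRUB route (<parameter> stands in for the actual option referenced above):

Append it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet <parameter>"

Then apply and reboot:
#update-grub
#reboot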
I use this HBA, which supports IT mode, for ZFS with Supermicro and Proxmox: https://www.supermicro.com.tw/products/accessories/addon/AOC-S3008L-L8i.cfm
I really need that space and shouldn't exceed $200 per TB. With an all-SSD configuration I cannot stay within that budget, so I need to optimize around 8 slots and those devices.
Maybe this thread will help you:
https://forum.proxmox.com/threads/proxmox-multiseat-gpu-passthrough-works-great-but-only-1-vm-at-a-time-allowed.28339/
We've decided to use the last empty slot for another Intel P3700 NVMe drive.
We'll have these disks (a rough zpool sketch follows the list):
2 x SATA DOM in ZFS RAID 1 for Proxmox, attached on board
6 x 4TB 7.2K SAS drives in 3 mirrored vdevs (12TB usable)
2 x 400GB Intel P3700 NVMe, partitioned: 40GB RAID-1 ZIL and 550GB (275GB + 275GB...
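As a sketch, creating that pool could look like this (all device names are placeholders):

#zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
#zpool add tank log mirror nvme0n1p1 nvme1n1p1
#zpool add tank cache nvme0n1p2 nvme1n1p2

The cache (L2ARC) partitions are striped rather than mirrored, since L2ARC contents are disposable.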