Same problem for me on a system with 2x EPYC 7542 processors. Actually, both the 6.4 and 7.0 installs were failing in the same way. When I remove my Quadro P400 from the system, though, at least 6.4 will install. I also tried the fixes you linked and they did not solve my problem.
Yes, I see that the chart is just theoretical now and only the bottom portion is directly relevant.
Would there be any downsides to just using a 256K / 1M volblocksize to get that extra 1% at 17% loss? At least in my use case I don't think there would be, because I am not running a database or...
Actually, no luck, because setting the block size to 40k then recreating the hard disks in the VM results in the error:
zfs error: cannot create 'bpool/vm-100-disk-0': 'volblocksize' must be power of 2 from 512B to 1M
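Since volblocksize has to be a power of two, the nearest valid choices around 40K are 32K and 64K. A minimal sketch of creating a zvol by hand with a valid value (the size and the bpool/vm-100-disk-0 name here are just examples, not the exact setup above):

zfs create -V 60G -o volblocksize=32K bpool/vm-100-disk-0
zfs get volblocksize bpool/vm-100-disk-0    # confirm the value that was applied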
Yes, I was just reviewing our conversation and I think I understand now. With mirrored pairs I'd lose 50% to redundancy, so unless performance was necessary (it's not) I'd be better served going with RAIDZ2 at 40K. Thanks so much for taking the time to explain everything. Your patience is really...
So should I stick to mirrored pairs, since there's no parity, or maybe RAIDZ1? Or is there an option to format the presented volume as something other than 512B in Windows? Or just give up on ZFS, pass the HBA to Windows, and let it handle things via Windows Storage Spaces, although I'd hate that.
OK, thank you. It sounds like for RAIDZ2 with 12 disks, ashift=9 is a reasonable choice, so I'll go with that.
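For reference, a sketch of what that pool creation would look like; the pool name and device paths are placeholders:

zpool create -o ashift=9 tank raidz2 /dev/sd[a-l]    # 12-disk RAIDZ2, 512B sectors
zdb -C tank | grep ashift                            # verify the ashift actually in use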
I have one more question, though: if volblocksize is also part of the equation, why does no one seem to recommend reducing it from the default 8K to something smaller? Performance?
Following your linked chart, it shows 33%. I am familiar with ZFS as a file server but not as VM storage. I am not familiar with volblocksize, but I guess it's different than the 128K recordsize ZFS defaults to? My data is backed up, so I typically use RAIDZ2 or RAIDZ1 for the...
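For anyone else confused by the distinction: recordsize applies to filesystem datasets and is an upper bound on variable-sized blocks, while volblocksize applies to zvols and is a fixed block size set at creation. A quick way to compare the two on any pool (the dataset names below are examples):

zfs get recordsize tank/fileserver          # filesystem dataset: default 128K, an upper limit
zfs get volblocksize tank/vm-100-disk-0     # zvol: fixed at creation, 8K default on older ZFS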
I created a Windows Server 2019 VM with a 120GB C: drive. After the VM was created I added a 60TB drive and formatted it with NTFS. This 60TB drive is a raw file on a ZFS pool. The issue I'm having is that even though Windows shows the D: drive with 11TB used and 49TB free, when I look at the...
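If it helps to compare the two views, ZFS's accounting for the raw file can be checked on the host; the path below is hypothetical:

du -h --apparent-size /tank/images/100/vm-100-disk-1.raw   # the size Windows sees
du -h /tank/images/100/vm-100-disk-1.raw                   # blocks actually allocated on disk
zfs list -o name,used,avail,compressratio tank             # pool-level usage and compression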
I have a few Linux VMs that get random kernel panics, approximately one panic every week across a group of 20 VMs. Searching for Proxmox radix_tree_lookup kernel panics turns up this thread, which discusses VirtIO SCSI. These 20 VMs are configured with SCSI Controller = VirtIO SCSI and hard disk...
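In case it's useful, the controller and disk bus can be inspected and changed from the host with qm; VMID 100 is just an example:

qm config 100 | grep -E 'scsihw|scsi0'   # show the current controller type and disk bus
qm set 100 --scsihw virtio-scsi-pci      # switch the controller type (takes effect after restart)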
At first I had Proxmox working using the interface enp2s0f0 and bridge vmbr0, which is connected to a private DHCP network. I wanted one VM to be reachable from the WAN, though, so I added enp2s0f1 and a masqueraded vmbr1, which is connected to a static public IP address.
Everything works but once I...
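For context, a masqueraded vmbr1 stanza along the lines of the Proxmox wiki pattern; the private subnet and bridge address below are assumptions, not the exact values from this setup:

auto vmbr1
iface vmbr1 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o enp2s0f1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o enp2s0f1 -j MASQUERADE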
Hello. FWIW, I just started getting a similar problem today; yesterday there were no issues. When I install an Ubuntu 16 container, the console works fine. I log in as root, make a user, add that user to sudo, then apt update && apt dist-upgrade && reboot. After rebooting, the console is blank and never loads. I...
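A couple of diagnostics that may help narrow this down; container ID 101 is hypothetical:

pct enter 101             # get a shell directly, bypassing the console device
pct set 101 -cmode shell  # switch the console mode away from the container's tty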
Hello. I got masquerading working with a single IP and a single NIC, but recently realized I can use my PowerEdge 2950's second network port (unless, unbeknownst to me, it is a LOM-only port?) with a second IP and masquerade behind that as well. If I could get this working correctly it would solve...
OK, ufw was interfering with NAT. If anyone has recommendations for keeping ufw installed while still letting masqueraded NAT traffic through, that would be great. Thanks.
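One common approach, sketched under the assumption of a 192.168.100.0/24 private subnet NATed out of eth0: allow forwarding in /etc/default/ufw, add a nat table at the top of /etc/ufw/before.rules, then run ufw reload.

# in /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"

# at the top of /etc/ufw/before.rules, before the *filter section
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
COMMIT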
I've edited /etc/network/interfaces to better reflect https://pve.proxmox.com/wiki/Network_Model but I'm still having the same issues.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.174
    netmask 255.255.255.224
    gateway 10.0.0.161

iface eth1 inet manual...
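Two quick checks that often explain NAT not passing traffic even with a correct-looking interfaces file:

sysctl net.ipv4.ip_forward             # must report 1 for masquerading to work
iptables -t nat -L POSTROUTING -n -v   # the MASQUERADE rule should be listed and counting packets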
Hello. I'm running Proxmox 4.4-1. I followed the guide at https://pve.proxmox.com/wiki/Network_Model for setting up masqueraded NAT, since my Proxmox host has a single public IP (shown as 10.0.0.174 in the example below), but my containers can't reach the Internet. They can ping my Proxmox host's gateway...