How will that affect VMs that use more CPUs than a single NUMA node has?
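In case it helps anyone following the thread, this is roughly how I'd check the host topology and turn on NUMA awareness for a wide VM. The VMID and the 2 x 8 layout below are placeholders, not my actual config:

# On the Proxmox host: show the NUMA nodes and which CPUs/memory belong to each
numactl --hardware
# Placeholder VMID/topology: enable NUMA emulation and give the guest one
# virtual socket per host node so a VM wider than one node at least sees
# a matching topology instead of a single flat socket
qm set 100 --numa 1 --sockets 2 --cores 8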
As for 30 Gb/s being fast: sure, to some. But I'm trying to take advantage of some fast NVMe pools I have on another VM that are capable of 10 GB/s.
I'm trying to increase the networking performance between two Linux VMs on the same Proxmox host and bridge. I've tried both a Linux bridge and an OVS bridge. I've tried increasing the MTU to 9000 and multiqueue to 4. I can't seem to get above 25-30 Gbps between the VMs. I'm running these VMs on an...
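For reference, this is roughly what I'm applying (the VMID, bridge, and guest interface name are placeholders for mine):

# Placeholder VMID/bridge: jumbo frames plus 4 virtio queues on the VM NIC
qm set 100 --net0 virtio,bridge=vmbr0,mtu=9000,queues=4
# Inside the guest the interface has to be raised to 9000 as well,
# and the queue count should show up under combined channels
ip link set dev eth0 mtu 9000
ethtool -l eth0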
Yes.
Also, the Optimize-Volume command works on C but most of my data (12TB+) is on D. And the command fails on D with...
> Optimize-Volume -DriveLetter D -ReTrim -Verbose
VERBOSE: Invoking retrim on Data (D:)...
VERBOSE: Performing pass 1:
VERBOSE: Retrim: 0% complete...
Optimize-Volume...
I call it a backup because it's a File Backup server that gets file backups replicated to it using the software Vembu. There is a main file backup server running Vembu and then this one that the data gets replicated to.
I'm well aware of the potential performance pitfalls but I don't have a...
Is there any risk of data loss with this method? My disk is attached as scsi0 with a VirtIO SCSI single controller. I'd have to shut the VM down, tick the Discard and SSD emulation boxes, and then boot the VM back into Windows to run these commands.
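For anyone searching later, the CLI equivalent of those tick boxes is something like this (the VMID and volume name are placeholders for mine):

# With the VM powered off, re-attach scsi0 with discard and SSD emulation enabled
# (the controller stays VirtIO SCSI single)
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1
# Then boot back into Windows and retrim from an elevated PowerShell:
# Optimize-Volume -DriveLetter D -ReTrim -Verbose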
Thanks, that's exactly what I did last night, and I have the import running now. Right now it's 60% done with 8.3 TiB transferred of 14.1 TiB.
Current state of the datastore is...
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
chs-vm01-datastore01...
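If it's useful to anyone, this is how I've been keeping an eye on the space side while the import runs (the zvol name is a guess at what Proxmox called the disk, not necessarily the real one):

# Pool-wide view of raw vs. allocated space
zpool list -v chs-vm01-datastore01
# Per-zvol view: volblocksize drives the RAIDz1 padding overhead and
# refreservation shows how much space the zvol pre-claims
zfs get volblocksize,refreservation,used,logicalused chs-vm01-datastore01/vm-100-disk-0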
I appreciate your reply, but I'm not going to get into different hardware possibilities because, honestly, it's just moot at this time. The situation (at work, this is not a home server) is that I have the following hardware that I need to migrate a 14TB VMware vdisk to in the next few weeks. I did...
It's not an option, unfortunately. I would have done that in a heartbeat, but I'm forced to swap hosts as well, and the newer hosts are coming from being VMware vSAN nodes that didn't come with or need hardware RAID controllers. So... I'm here now. I don't have any preference as to what FS I use...
I just don't really have a choice. We're being forced off VMware due to licensing increases and need to move over to Proxmox. The move also means moving my 9 disks from a hardware RAID controller to passthrough, so it's either ZFS RAIDz1 or mdadm, which isn't officially supported.
@Dunuin Can you help me understand what's the best blocksize to use to maximize space here? This doesn't need to be a high performance pool. It's just storing file backups that never run at greater than 1Gbps (over the network).
I'm trying to store a 14TB vdisk (VMware conversion) on a 9 x 2.4TB RAIDz1 pool. However, anytime I try to import the disk, it gets to about 90% and errors out saying I'm out of space. I've been reading that this is a ZFS padding issue.
So my question is, how can I configure this pool...
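What I'm planning to try, in case someone wants to correct me: set a larger block size on the ZFS storage in Proxmox and then re-import, since volblocksize can't be changed on an existing zvol. Rough back-of-the-envelope, assuming ashift=12 (4K sectors) on 9 x RAIDz1: a 16K block is 4 data sectors + 1 parity = 5, padded up to 6 because RAIDz1 allocates in multiples of 2, so roughly 50% overhead; a 64K block is 16 data + 2 parity = 18 with no padding needed, so it drops back to the expected ~12.5%. The storage ID below is a placeholder:

# Placeholder storage ID: larger volblocksize for newly created disks
pvesm set chs-vm01-datastore01 --blocksize 64k
# Thin provisioning so the zvol doesn't also grab a full refreservation
pvesm set chs-vm01-datastore01 --sparse 1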
Looks like Step 6 from this link worked. Is it normal to have to do this in 2024 for a Linux VM?
https://forum.proxmox.com/threads/how-to-boot-grubx64-efi-after-import-from-hyper-v.55429/post-255178
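For anyone who lands here with the same problem and doesn't want to dig through that thread, the gist of the workaround (not necessarily the exact wording of Step 6) is to make GRUB visible at the removable-media fallback path the firmware checks when it has no boot entry for the disk. Something like this from inside the Debian guest, assuming the ESP is mounted at /boot/efi and the passed-through disk is /dev/nvme0n1:

# Copy Debian's GRUB binary to the fallback path OVMF falls back to
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI
# Or recreate the NVRAM boot entry instead (disk/partition are placeholders)
efibootmgr --create --disk /dev/nvme0n1 --part 1 --label debian --loader '\EFI\debian\grubx64.efi'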
I have an NVMe drive passed through to a Linux VM in Proxmox. The Debian server installer detects the disk fine, partitions it, and installs the OS. When the install is done and I try to boot the VM, it just goes to the UEFI shell. I've verified the boot order is correct (hostpci1) in the Boot...