I can only refer to my own experience with the hardware I use / own.
In general, for VMs and databases you need high fsync numbers; that means the device is able to quickly fulfill the "I want to be sure the data is written and safe" (sync) write requests coming from a program or the OS.
This is a much slower process...
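To see for yourself how much slower sync writes are, you can compare buffered vs. synchronous writes with dd (a generic Linux sketch, not a substitute for pveperf; the /tmp paths and sizes are arbitrary):

```shell
# Buffered writes: the kernel can just cache them, so this is fast.
dd if=/dev/zero of=/tmp/plain.img bs=4k count=256

# Synchronous writes: every 4k block must reach stable storage before
# dd continues. Slow on drives without power loss protection.
dd if=/dev/zero of=/tmp/sync.img bs=4k count=256 oflag=dsync
```

On a drive with power loss protection the dsync run stays fast; on consumer SSDs the throughput collapses, and that is what pveperf's FSYNCS/SECOND number captures.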
Ok, seems that "hardware" read speed is ok for the 4TB WD.
I don't understand how the VM 100 and 301 disks are configured:
a) vm 100 should have all disks named
datastore_zfs:vm-100-disk-0
datastore_zfs:vm-100-disk-1
datastore_zfs:vm-100-disk-2
but instead I see
datastore_zfs:vm-100-disk-0...
Just a question: when you talk about "ZFS replication", you mean ZFS storage replication, and you know that it is not real time, so when you switch off the node that is running the VM, the VM "migrates" and you lose "n" seconds of written data, correct?
I ask because I would love to have real time...
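For reference, a hedged sketch of how a storage replication job is scheduled with pvesr (VMID 100, job id 100-0 and target node name `pve2` are made-up examples; the shortest schedule Proxmox accepts is one minute, so some data-loss window always remains). The `command -v` guard just keeps the snippet harmless on a non-Proxmox box:

```shell
# Replicate guest 100 to node pve2 every minute (the minimum interval);
# on failover you can still lose up to the last interval of writes.
if command -v pvesr >/dev/null 2>&1; then
  pvesr create-local-job 100-0 pve2 --schedule "*/1"
  pvesr list
else
  echo "pvesr not installed (not a Proxmox node)" > /tmp/pvesr-note.txt
fi
```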
And...? No message in the console? I don't mean in the logs; just after you entered that command, did you get any message telling you why the VM could not start?
Here are my logs when I successfully start a VM (VM n. 999); as far as I can see, they are almost identical to yours
Jan 8 22:21:15 proxmm01...
Not related to read speed, but your SSD is not very performant; my DC500M gets
FSYNCS/SECOND: 10112.94
If I'm correct, /dev/sdc is the SSD; it also seems really slow at reads... it should almost saturate SATA 3 speed at ~500MB/s
My workstation's Crucial MX500 (a consumer-grade SSD) has
# hdparm -tT...
Yes, bad idea: they don't have power loss protection, so sync writes can't be considered done until they are really done, and that's slow.
I would buy a Kingston DC500M (not the 'R') 960GB for $200 each (I know, yours is $100, but trust me, it's worth the price difference).
I.e. the first test on a DC500M where...
Yes, you can read
https://www.ixsystems.com/blog/library/wd-red-smr-drive-compatibility-with-zfs/
and check your WD model here (old ones, with 64MB cache, are generally OK)
https://nascompares.com/answer/how-to-tell-a-difference-between-dm-smr-and-non-smr-cmr-drives-hdd-compare/
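To get the exact model strings to look up in those tables, something like this should work on the Proxmox host (lsblk is part of util-linux; which devices show up is of course specific to your machine):

```shell
# Print each physical block device with the model name it reports,
# so you can compare it against the SMR/CMR lists linked above.
lsblk -d -o NAME,MODEL,SIZE
```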
Was it working before, and now it is not?
start it from a shell
qm start VMID
read the errors carefully, and if it's not clear, report back.
AFAIR you can also do it from the GUI: start the VM, double-click on the start task in the lower part of the GUI, and read the message
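If you want to keep the exact error text for posting, you can capture the output of qm start to a file (VMID 100 is a placeholder; the guard only makes the sketch safe to paste on a machine without qm):

```shell
# Start the VM from the shell and save the full message it prints.
if command -v qm >/dev/null 2>&1; then
  qm start 100 2>&1 | tee /tmp/qm-start.log
else
  echo "qm not installed (not a Proxmox node)" > /tmp/qm-start.log
fi
```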
If is just created and never...
Never tried it, but in principle you could add that disk to the first VM, format it with a file system that supports multiple access/mount (a "clustered file system", maybe https://en.wikipedia.org/wiki/GFS2 but I have no idea, I've never used one), manually edit the config of the second VM and add...
I've no idea, so just some random shots in the dark:
post the output of the following (learn how to copy/paste from an SSH connection to Proxmox, so you can avoid images and just use the "code" tags)
pveversion -v
hdparm -tT /dev/sda
hdparm -tT /dev/sde
pveperf
free -m
qm config 100
qm config 301
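If it helps, the commands above can be collected into one pasteable file with a small loop; this sketch just skips anything that fails or isn't installed (the device names and VMIDs come from the list above and may differ on your machine):

```shell
# Run each diagnostic and append its output to one file for the forum post.
OUT=/tmp/pve-diag.txt
: > "$OUT"
for c in "pveversion -v" "hdparm -tT /dev/sda" "hdparm -tT /dev/sde" \
         "pveperf" "free -m" "qm config 100" "qm config 301"; do
  printf '### %s\n' "$c" >> "$OUT"
  sh -c "$c" >> "$OUT" 2>&1 || printf '(failed or not installed)\n' >> "$OUT"
done
```

Then paste /tmp/pve-diag.txt inside [code] tags.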
In what...
Ah, OK, I have no experience with LXC containers, so maybe it is just how pve 6.x vs 5.x shows their "disks". Of course the qm config command fails; I was convinced that your VMs were KVM, not LXC, sorry.
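For LXC containers the counterpart of qm config is pct config, so (assuming 100 is a container ID; the guard is only there so the sketch is safe to paste anywhere):

```shell
# Show the configuration of container 100 (pct is the LXC counterpart of qm).
if command -v pct >/dev/null 2>&1; then
  pct config 100
else
  echo "pct not installed (not a Proxmox node)" | tee /tmp/pct-note.txt
fi
```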
So we are back to the starting point...
Could it be that some LXC is doing much more I/O? Can you produce a...
if you mean
[ 0.013208] check: Scanning 1 areas for low memory corruption
I really think it is not an error, just a notification that it is scanning (checking) whether there is low memory corruption.
How on earth did you migrate the VMs? Backup and restore? In the GUI, are they listed as VMs or Containers?
You can't transform a VM into a container simply with backup/restore, so I've no idea what's going on. Where is the above image taken from? I've not found a view like that in the GUI (maybe my...
The old server's mount output has the line
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,relatime,stripe=256,data=ordered)
The new server does not. The old server's VM storage was based on LVM; the new server... I don't understand!!! /dev/loop55? It seems that you have raw files in your local ext4 partition that are...
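To confirm what /dev/loop55 actually is, losetup can list the file behind each loop device (a generic util-linux sketch; on your new server I'd expect it to point at raw image files inside the ext4 partition):

```shell
# List every active loop device with the path of the file backing it.
losetup -a

# On a Proxmox node, also show how each storage is defined.
command -v pvesm >/dev/null 2>&1 && pvesm status || true
```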
I'll add that I had problems installing Kubuntu from USB2 ports on my new workstation, while it went fine on USB3. I thought USB2 was "more compatible", but that was not the case...
If you check your df -h output, you'll see that you don't have the same kind of storage configured!!!
Old server has
/dev/mapper/pve-data 3,6T 1,4T 2,1T 40% /var/lib/vz
that is LVM-type storage (so VM disks sit directly on a "block device", without an intermediate file system on the Proxmox side)
new...