I ran the command below; here are the results:
root@pmox1:~# memtester 1024 5
memtester version 4.3.0 (64-bit)
Copyright (C) 2001-2012 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).
pagesize is 4096
pagesizemask is 0xfffffffffffff000
want...
OK, I got another crash today with "unable to handle kernel paging request" on the same server. Below is the stack trace:
Mar 26 19:53:22 pmox2 pvestatd[1543]: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,lv_name,lv_size,lv_attr,po...
Hi @Chris, thank you for your response.
I had issues with the previous kernel, but I don't have its exact version, and I'm not sure whether the issue was identical either.
No, I have not tried to reproduce this issue on the previous kernels. I can give it a shot if you could tell me which kernel...
With the latest Proxmox 5.3-1 I am running into a bug that halts KVM guests. It is running on an Intel NUC 8. Below are the syslog and pve info.
Mar 26 00:21:09 pmox2 kernel: [630425.268008] BUG: unable to handle kernel paging request at ffffffffc17edb60
Mar 26 00:21:09 pmox2 kernel: [630425.268452] IP...
Adding a simple LVM volume through the GUI is easy, and I have done it in the past; it works just fine. However, adding LVM via the GUI does not give you the option of creating software RAID, and as mentioned, software RAID is a must.
Let's try to find the answer a different way. The goal is to create LVM...
Unfortunately hardware RAID is not an option since we use PCIe SSDs; it has to be software RAID, and high performance is a must. I gave raidz a shot but it was extremely slow; LVM RAID seems a lot faster. However, ideally what I would like to do is create the LVM RAID and then add this to...
Yes, I gave ZFS a shot, and performance decreased drastically. I know I am missing something very trivial with Proxmox showing 100% allocated when it is not; I am just not sure whether it is a Proxmox bug or whether my implementation below is incorrect.
pvcreate /dev/nvme0n1 /dev/nvme1n1...
Hi @dcsapak thank you for getting back.
I have 4x 1T NVMe SSDs. I have to create RAID10 for redundancy, and another goal is to get the best performance from those PCIe SSDs. How else can I accomplish this without the above steps?
Goal:
Create 4 drive RAID10 over LVM and add that storage to Proxmox.
Problem:
When I add the RAID10 LVM storage, the Proxmox GUI shows the storage as 100% full and reports ~4T of space instead of respecting the RAID10 mirrors and showing ~2T. Creating a mount point and mounting to a folder on root...
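For reference, the ~2T figure is just standard RAID10 arithmetic; a quick sketch using the drive count and size from this setup:

```shell
#!/bin/sh
# RAID10 stripes across mirror pairs, so usable capacity is half the raw total.
drives=4          # 4x NVMe SSDs, per the setup above
size_tb_each=1    # 1T each
raw_tb=$((drives * size_tb_each))   # total raw capacity
usable_tb=$((raw_tb / 2))           # usable capacity after mirroring
echo "raw=${raw_tb}T usable=${usable_tb}T"
# → raw=4T usable=2T
```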
@guletz What I meant is that I did not expect raidz to perform that poorly compared to a single drive. Naturally, I was expecting some slowdown, but not to this extent. If you look at the results, you can see that raidz1 on NVMe PCIe SSDs performed almost the same as mirrored non-performance SSDs over...
Another question: the above solution, i.e. creating the LVM RAID10, mounting it to a folder in the filesystem, and then using that inside Proxmox, does seem to work. However, when I add the LVM directly in the Proxmox GUI, it shows 100% of the space taken. Here are the steps for both scenarios.
Does NOT work...
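For completeness, the scenario that does work for me can be sketched roughly as below; the volume, storage, and mount-point names here are hypothetical placeholders, not the exact ones from my setup:

```shell
# Format the RAID10 logical volume and mount it (hypothetical names)
mkfs.ext4 /dev/my_vol_grp/my_raid10_lv
mkdir -p /mnt/nvmeraid10
mount /dev/my_vol_grp/my_raid10_lv /mnt/nvmeraid10

# Add the mount point to Proxmox as directory storage
pvesm add dir nvmestore --path /mnt/nvmeraid10
```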
OK, below are the steps taken to create LVM RAID10 on 4x NVMe SSD drives. I am also attaching benchmarking results.
# Create physical volumes
pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# Create the volume group
vgcreate my_vol_grp /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1...
Update on another round of experiments: I destroyed the raidz1 pool since it was not beneficial from a performance perspective, and created RAID10 with the commands below:
zpool create -f -o ashift=12 nvmeraid10pool mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1
Below are the same benchmark results. It...
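For anyone who wants to reproduce the comparison, a benchmark along these lines would do; the target path and the fio parameters here are my assumptions, not the exact run I used:

```shell
# 4k random read/write benchmark with fio (point --filename at the pool or LV under test)
fio --name=randrw --filename=/nvmeraid10pool/fio-testfile --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting
# Note: O_DIRECT support on ZFS depends on the ZFS version; drop --direct=1 if fio errors out.
```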
https://pve.proxmox.com/wiki/Software_RAID
It looks like, as a general approach, software RAID10 is not recommended. I don't mind setting it up and testing it, but if Proxmox does not recommend it for production environments, what is the next best approach?
This is actually funny: around the same time I created a very similar topic; it is the second one on the list in the forum (for now). We also have production-level setup requirements. We are using high-performance NVMe SSDs and are looking into the best-performing FS while having data...