Server 2022 RDS - Hardware RAID 10 SAS 10k

So I have a client that came from a bare-metal setup. I got them a new budget tower server, which is what they wanted: 64GB RAM, a slightly better CPU, and 4 drives vs. 2 before.


D-SAS-25-1.2TB-10K-12GBPS: Dell SAS 2.5" 1.2TB 10K 12GBPS (RAID 10)
KIT-RAID-T430-H730: Dell PERC H730 Adapter w/ 1GB NV Flash Backed Cache

I have the drives above set up in LVM. I'm coming from ESXi, and this is my first setup with Proxmox. I followed all the recommended config guidance for the E5-2620 CPU, but their box is painfully slow, and to me it points at the disks, since CPU and RAM are barely moving in Task Manager. Outlook is locking up all the time, and some other software that works fine on their Server 2016 setup with half the RAM takes 10 seconds to search for records.

So I started digging around and saw some users talking about SSDs, ZFS, Ceph, etc. Did I maybe set this box up wrong on the disk config side of things inside Proxmox? It's a single VM with only 4 users hitting it per day, 3 in at a time max. A very, very low-use box.
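
For reference, a quick host-side sanity check of the array can be done from the Proxmox shell with pveperf (and optionally fio). The paths below assume the default install layout where local and local-lvm sit on the same RAID 10 array; adjust if yours differs.

# Rough check of the array from the Proxmox host. Low FSYNCS/SECOND
# usually points at the disks / controller cache path rather than CPU or RAM.
pveperf /var/lib/vz

# Optional: 4k random read test with fio (apt install fio).
# The test file path is just an example location on the local storage.
fio --name=rand4k --filename=/var/lib/vz/fio-test.bin --size=2G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based
rm /var/lib/vz/fio-test.bin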
 
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0;ide0;ide2;net0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide0: local:iso/virtio-win-0.1.240.iso,media=cdrom,size=612812K
ide2: none,media=cdrom
lock: backup
machine: pc-q35-8.1
memory: 32000
meta: creation-qemu=8.1.5,ctime=1710777406
name: server2022
net0: virtio=BC:24:11:B8:47:AC,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-lvm:vm-100-disk-1,cache=writeback,size=400G
scsihw: virtio-scsi-pci
smbios1: uuid=b7994042-28f4-4f34-8c8f-7f8c8ba44c11
sockets: 2
vga: virtio
vmgenid: 8a454ae5-bc9a-4d86-9acf-1874871b8b24
 
I have made some tweaks since the last post:


agent: 1,fstrim_cloned_disks=1
balloon: 0
bios: ovmf
boot: order=scsi0;ide0;ide2;net0
cores: 4
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide0: local:iso/virtio-win-0.1.240.iso,media=cdrom,size=612812K
ide2: none,media=cdrom
machine: pc-q35-8.1
memory: 32000
meta: creation-qemu=8.1.5,ctime=1710777406
name: server2022
net0: virtio=BC:24:11:B8:47:AC,bridge=vmbr0,firewall=1,queues=8
numa: 1
ostype: win11
scsi0: local-lvm:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=400G
scsihw: virtio-scsi-pci
smbios1: uuid=b7994042-28f4-4f34-8c8f-7f8c8ba44c11
sockets: 2
vga: virtio
vmgenid: 8a454ae5-bc9a-4d86-9acf-1874871b8b24
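
For reference, the same tweaks can be applied from the host shell with qm set instead of editing the config file; VM ID 100 matches the config above. (As far as I know, iothread only takes effect with scsihw: virtio-scsi-single, not virtio-scsi-pci.)

# Run on the Proxmox host; 100 is the VM ID from the config above.
qm set 100 --balloon 0                                   # no ballooning for the RDS guest
qm set 100 --numa 1 --cores 4 --sockets 2                # expose NUMA topology to Windows
qm set 100 --scsi0 local-lvm:vm-100-disk-1,cache=writeback,discard=on,iothread=1
qm set 100 --net0 virtio=BC:24:11:B8:47:AC,bridge=vmbr0,firewall=1,queues=8

# Disk and NIC changes need a full stop/start, not a reboot from inside the guest:
qm shutdown 100 && qm start 100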
 
I just edited this line to GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

What does it mean by update-grub?
 
Okay, got update-grub working:


BOOT_IMAGE=/boot/vmlinuz-6.8.12-1-pve root=/dev/mapper/pve-root ro quiet mitigations=off
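
For anyone following along, the whole sequence on a GRUB-booted Proxmox host looks roughly like this (hosts booting via systemd-boot, e.g. ZFS-on-root UEFI installs, run proxmox-boot-tool refresh instead of update-grub):

# 1. Edit the kernel command line
nano /etc/default/grub
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# 2. Regenerate the GRUB config
update-grub

# 3. Reboot, then confirm the flag was picked up
reboot
cat /proc/cmdline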
 
Check the BIOS and set Max Power / Performance.
The default is often best efficiency instead of best performance.

BTW, nowadays, running Windows as an RDS VM on HDDs requires more than patience.
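
On the host side, the CPU frequency governor is worth a look as well; this is plain Linux sysfs, nothing Proxmox-specific (if the cpufreq directory doesn't exist, scaling is being handled entirely by the firmware):

# Show the current governor for each core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Switch all cores to the performance governor until the next reboot
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor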
 
Check the BIOS and set Max Power / Performance.
The default is often best efficiency instead of best performance.

BTW, nowadays, running Windows as an RDS VM on HDDs requires more than patience.

I flipped it from performance per watt to max performance. I still have a few servers at my main office running 10k and 15k SAS drives on ESXi 6.5 with 4-10 VMs each. The only difference is they aren't running Server 2022 RDS. Most are old 2008 boxes that are internal only, plus a few Linux boxes.
 
I ran CrystalDiskMark on bare metal vs. their new VM, and the new VM is way, way faster for both reads and writes. I will see what they say tomorrow with all the tweaks I have made today and yesterday. It seems snappier to me, but I don't have access to the same applications they run daily.
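
For a comparable number from inside the guest, Microsoft's diskspd (the engine CrystalDiskMark wraps) can be run directly; the file path below is just an example, and the flag values are a reasonable mixed-I/O starting point rather than anything canonical:

REM 30-second 4K random test, 70% read / 30% write, 4 threads, queue depth 32,
REM with latency stats, against a 2GB test file on C: (delete it afterwards).
diskspd.exe -b4K -d30 -o32 -t4 -r -w30 -c2G -L C:\disktest.dat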
 
What do you mean by "internal only"?

Old 2008 SP2 and R2 boxes that we don't have open to external traffic, for security reasons.


I can notice a difference just in Outlook with all these changes. It would freeze and be very laggy before. I told the client to have at least 3 users log in at the same time and report back.
 
Old 2008 SP2 and R2 boxes that we don't have open to external traffic, for security reasons.
Are they used as RDS?
Of course, RDS should never be opened to the public.
IMO, RDS on 2008 R2 (= Win7) running on HDDs is acceptable, RDS on 2012 R2 (= Win8.1) starts to get hungry, and 2016 through 2022 (= Win10) as RDS are heavy and break HDD-based setups. Of course, the usage context needs to be taken into account.
 
Are they used as RDS?
Of course, RDS should never be opened to the public.
IMO, RDS on 2008 R2 (= Win7) running on HDDs is acceptable, RDS on 2012 R2 (= Win8.1) starts to get hungry, and 2016 through 2022 (= Win10) as RDS are heavy and break HDD-based setups. Of course, the usage context needs to be taken into account.

No, these are very old SQL boxes at my main job.


The client with the slow 2022 box: their first gripe is that their old 2012 server running RDS, on hardware older than the new Proxmox build, is so much faster. Everything I have read says RDS got much slower with 2016, worse with 2019, and is terrible now with 2022. The issue is they must stay HIPAA compliant, so they need to stay up to date.
 
We'll see what they say tomorrow. If they still aren't happy, I will order them enterprise SSD drives, back up the VM, rebuild the array with the new drives, and go from there.
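
If it comes to that, the backup/restore part is straightforward from the host with vzdump and qmrestore; the storage name "backups" and the archive path below are placeholders:

# Full offline backup of VM 100 to a backup-capable storage (name is a placeholder)
vzdump 100 --storage backups --mode stop --compress zstd

# After rebuilding the array and reinstalling, restore with:
qmrestore /mnt/backups/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm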
 
Only as VMs running on pre-2016 hardware.

I see! They have 4 slots left for drives. I guess I could use those for an SSD RAID, set up Proxmox again there, and then keep the SAS array for backups and additional storage.
 
I doubt SSDs will unlock it, but try it; IMO, no investment should be made in an E5-2620.
 
I doubt SSDs will unlock it, but try it; IMO, no investment should be made in an E5-2620.

lol, I know, they were on a budget. I may just move them to my office on a monthly retainer; the issue is they print checks and have someone local pick them up. My office is 3 hours from them, so I'm not sure how to handle the paper check generation.
 
Update

So we ordered four 1.92TB Intel enterprise SSDs.

Right now Proxmox and its single Windows Server 2022 RDS VM run from the RAID 10 of 10k SAS drives.
I'd like to avoid rebuilding Proxmox. How would it run if Proxmox stayed on the 10k SAS and I built a RAID 10 SSD array to house the single VM?

Or should I rebuild it all, running Proxmox and the VM from SSD, and use the old storage as additional backup space, etc.?
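
Keeping Proxmox on the SAS array and only moving the VM disk onto the new SSD array works without a reinstall. A rough sketch, assuming the H730 presents the new SSD RAID 10 as /dev/sdb and using "ssd-thin" as a made-up storage name:

# Build an LVM-thin pool on the new SSD virtual disk (device name is an assumption)
pvcreate /dev/sdb
vgcreate ssd /dev/sdb
lvcreate -l 95%FREE --thinpool data ssd     # leave some room for thin-pool metadata

# Register it as a Proxmox storage
pvesm add lvmthin ssd-thin --vgname ssd --thinpool data --content images,rootdir

# Move the VM's disk to the SSD storage and drop the old copy on local-lvm
qm move-disk 100 scsi0 ssd-thin --delete 1

The SAS-backed local-lvm then stays available for backups and extra storage.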
 
