Will a Dell PowerEdge R720 be up to the task?

zecas
Hi,

I'm planning a new server build with Proxmox in mind, to replace a very old machine running Windows Server 2003 that needs to be retired.

It would be great to hear your opinion on this build.

My main concern is that, going for a refurbished server, which means older hardware, it may not be up to the task.

So the hardware in mind is as follows:

Dell PowerEdge R720, 2x Xeon E5-2690 v2 (10 cores, 3.0 GHz)
128 GB RAM (PC3-10600R)
PERC H710 Mini RAID card (flashed to IT mode: Dell R720 flashing Dell PERC H710 mini into IT mode, H310/H710/H710P/H810 Mini & Full Size IT Crossflashing)
SFF chassis (2.5", 16-bay unit)

For disks, I'm planning to use:
- 2x Samsung PM883 240 GB SSDs in ZFS RAID 1 for the Proxmox OS only (refurbished);
- 4x Samsung PM897 480 GB SSDs in ZFS RAID 10 for VM data (brand-new disks).

My intentions:
- Run one Windows Server 2022 Standard VM as a domain controller, file server, and host for a simple MySQL database with little traffic;

In the future (if the hardware still permits):
- 2x additional Windows 11 desktop VMs for remote work;
- An additional Windows Server 2022 Standard VM for everything except the domain controller role (moving roles like file sharing and MySQL off the server above);
- A Linux VM with Docker, to gradually migrate tools to this Docker host (MySQL, Nextcloud, etc.).

So this Dell R720 seems like a good machine, but do you see any problem in using it for this job?

Will it be able to deal with those VMs?

This is a small office network with 5 to 7 desktop/laptop machines accessing it; my concern is whether the hardware can handle it.

Thank you.
 
For disks, I'm planning to use:
- 2x Samsung PM883 240 GB SSDs in ZFS RAID 1 for the Proxmox OS only (refurbished);
- 4x Samsung PM897 480 GB SSDs in ZFS RAID 10 for VM data (brand-new disks).
Why do you split it? It will be faster if you use all 6 disks in a ZFS striped mirror, or just use 4 and it will be cheaper. PVE only uses about 4 GB of space.

Will it be able to deal with those VMs?
I cannot speak for Windows performance, but the hardware is very capable and should be fast for most jobs. Giving a real-world answer to "will it be able to" is very workload-specific and you have to check for yourself. Just for comparison: I'm running my whole network, including multiple Docker setups, VPN gateways, file servers and database servers (SQL and non-SQL), on an Atom with 16 GB RAM and two enterprise SSDs while using only 12 W - but without Windows of course, no one needs that. In my experience, the requirements for Windows are much higher (factor 10 and up) with respect to memory and IO, but you have server hardware, which will be able to do what you want. We ran our whole company (15 people) on 3 machines two generations older than yours until the beginning of the year, with over 100 VMs, without any problems. IO is good with SSDs, and the RAM is sufficient for most things. Your RAM is on the lower end speed-wise (10600R is slow) and the amount is "upgradeable", but in general it's a good start.

Keep in mind that ZFS will - by default - use up to 64 GB of RAM (half of it) in your setup for its ARC cache, to accelerate your IO experience.
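If you would rather keep more of that RAM for the VMs, the ARC ceiling can be lowered. A minimal sketch, assuming you want to cap it at 16 GiB (the value is just an example):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 16 GiB (16 * 1024^3 bytes)
options zfs zfs_arc_max=17179869184
After editing the file, refresh the initramfs (update-initramfs -u) and reboot for the new limit to take effect.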
 
Hardware seems more than capable for the planned uses, but you are lacking something critical: backups. Plan on having at least a VM with Proxmox Backup Server and a USB drive to store backups of your VMs in it.
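If you do go the PBS-in-a-VM route, pointing a datastore at that USB drive is a one-liner from inside the PBS VM. A rough sketch, assuming the drive is already formatted and mounted at /mnt/usb-backup (name and path are placeholders):
Code:
# create a PBS datastore on the mounted USB drive
proxmox-backup-manager datastore create usb-backup /mnt/usb-backup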
 
Why do you split it? It will be faster if you use all 6 disks in a ZFS striped mirror, or just use 4 and it will be cheaper. PVE only uses about 4 GB of space.



The reason to split is basically to separate roles. Proxmox will use very little, so the 240 GB is plenty of space; I may put the ISO files in there too.

Then I would have the main pool with more robust SSDs, where the VMs and LXCs will be stored and nothing else.

Hardware seems more than capable for the planned uses, but you are lacking something critical: backups. Plan on having at least a VM with Proxmox Backup Server and a USB drive to store backups of your VMs in it.

Yes, backups will also be taken into consideration, on an older computer that will have that job (remote storage is also being considered). But your idea of setting up PBS in a VM instead of on bare metal seems a good idea to explore.



Would you see these upgrades improving performance in a considerable way?
1- 128 GB RAM (PC3-12800R) - memory speed is a notch higher than the 10600R above
2- 2x Xeon E5-2695 v2 (12 cores, 2.4 GHz) - lower frequency, with 2 more cores and a slightly lower TDP than the E5-2690 v2 above


Thank you.
 
Yes, backups will also be taken into consideration, on an older computer that will have that job (remote storage is also being considered). But your idea of setting up PBS in a VM instead of on bare metal seems a good idea to explore.
Best would be, of course, to run PBS on another machine and not as a VM on your PVE node - especially later, when backup/restore of PVE nodes gets added.
When running PBS as a VM, you should also use vzdump to back up your PBS VM to a NAS or another disk, because PBS can't back up itself, and without PBS running you can't restore any of the other guests.
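A minimal sketch of such a vzdump job, assuming the PBS VM has ID 100 and a storage named "nas-backup" is already configured (both are placeholders):
Code:
# one-off backup of the PBS VM to a separate storage
vzdump 100 --storage nas-backup --mode snapshot --compress zstd
The same thing can of course be scheduled from the GUI as a regular backup job.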
 
1- 128 GB RAM (PC3-12800R) - memory speed is a notch higher than the 10600R above
Performance-wise, the optimal solution would be 1866 MHz memory (PC3-14900).
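For a rough sense of scale, the PC3 number is simply the theoretical peak bandwidth per channel (transfer rate x 8 bytes for a 64-bit DDR3 channel):
PC3-10600R: 1333 MT/s x 8 B ≈ 10.6 GB/s per channel
PC3-12800R: 1600 MT/s x 8 B ≈ 12.8 GB/s per channel
PC3-14900R: 1866 MT/s x 8 B ≈ 14.9 GB/s per channel
So going from 10600R to 14900R is roughly 40% more peak bandwidth per channel, before considering how the four channels per socket are populated.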

The reason to split is basically to separate roles. Proxmox will use very little, so the 240 GB is plenty of space; I may put the ISO files in there too.
Then I would have the main pool with more robust SSDs, where the VMs and LXCs will be stored and nothing else.
You also need robust SSDs for PVE, because it also writes a lot (internal database /etc/pve, rrdgraphs, logs, etc.). Don't underestimate this.
You can of course do whatever you want, but having everything in one pool is always faster with the same number of disks (2 pools on 6 disks will be slower than 1 pool on 6 disks).
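For illustration, a single pool built from all six SSDs as three striped mirror vdevs could be created like this (the pool name and device paths are placeholders; in practice you would use /dev/disk/by-id paths):
Code:
# one pool, three mirror vdevs striped together (RAID 10 style) across six SSDs
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf
If you let the Proxmox installer create the pool, selecting ZFS RAID10 over the six disks produces the same layout.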
 
Best would be, of course, to run PBS on another machine and not as a VM on your PVE node - especially later, when backup/restore of PVE nodes gets added.
When running PBS as a VM, you should also use vzdump to back up your PBS VM to a NAS or another disk, because PBS can't back up itself, and without PBS running you can't restore any of the other guests.

Yes, I see what you mean, thanks for the heads-up.
Is running PBS as a VM something people actually implement, or not in a production environment?

Performance-wise, the optimal solution would be 1866 MHz memory (PC3-14900).


You also need robust SSDs for PVE, because it also writes a lot (internal database /etc/pve, rrdgraphs, logs, etc.). Don't underestimate this.
You can of course do whatever you want, but having everything in one pool is always faster with the same number of disks (2 pools on 6 disks will be slower than 1 pool on 6 disks).

I'll investigate how much it would cost to upgrade the RAM speed.

This server will have SSD drives only; since it's a new server, I opted for enterprise drives, as was pointed out to me in other threads on these forums.

On another server, the plan was to run SSDs for the Proxmox OS only and HGST SAS drives for VM data (much slower, I know :( ), so I wouldn't mix the drives, of course. As such, no one pointed that scenario out to me, that running everything on the same pool would be fastest. I'm always learning lol.

Why would it be faster? And would it be significantly faster? Will Proxmox take considerably more RAM to manage 2 ZFS pools?

I chose the smaller PM883 disks (1.3 DWPD for 3 years) thinking they would be great enterprise entry-level disks for the job. I mean, they are not as robust as the PM897 used for VM data (3 DWPD for 5 years), but I was expecting them to hold up much better than prosumer-level disks like the 860 PRO.

I was also thinking that splitting the pool roles would help with long-term management, like upgrading the VM pool or the OS pool without interfering with one another. And even if at some point the OS pool gets into any kind of problem, in the worst case I could reinstall everything without worrying about my VM data.

Let me know your thoughts please.


Thank you.
 
I'll investigate how much it would cost to upgrade the RAM speed.
It's not a must and will only increase the speed slightly, but if you plan to upgrade the RAM, just go with faster and higher-capacity modules. Another problem is that if you populate all RAM slots, it will run slower, because you often cannot run everything at the highest speed. This is very dependent on the specific CPU and module configuration.

Why would it be faster?
Simply speaking: more vdevs is always faster; it scales linearly with the number of vdevs (2 striped mirrors are slower than 3 striped mirrors), and that is the fastest layout that still has good redundancy. Of course, striping everything together without redundancy (like RAID0) would be another factor of 2 faster, but we're not going to do that because we care about our data.

And would it be significantly faster?
3x the speed of one drive vs. 2x the speed of one drive.

Will Proxmox take considerably more RAM to manage 2 ZFS pools?
The limit applies to ZFS as a whole, so the number of pools does not change the total amount. But two pools share that maximum, so you have less cache per pool.

I chose the smaller PM883 disks (1.3 DWPD for 3 years) thinking they would be great enterprise entry-level disks for the job. I mean, they are not as robust as the PM897 used for VM data (3 DWPD for 5 years), but I was expecting them to hold up much better than prosumer-level disks like the 860 PRO.
Absolutely true. 1 DWPD is a lot and I don't see that in any system I have.
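To put that rating into numbers (a rough calculation using the figures quoted above): 240 GB x 1.3 DWPD x 365 days x 3 years ≈ 342 TB of total writes over the warranty period - in other words, about 312 GB written every single day for three years.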

I was also thinking that splitting the pool roles would help with long-term management, like upgrading the VM pool or the OS pool without interfering with one another. And even if at some point the OS pool gets into any kind of problem, in the worst case I could reinstall everything without worrying about my VM data.
Yes, that is THE ONLY reason I know of to split it, but speaking from experience, unless you screw up big time with your OS, you will not have any problem - and even if you do screw up, most of the time a rescue disk will save the day. Yet keep in mind that in such a setup with split pools, you NEED to back up the /etc/pve directory with all the configuration of your VMs. Just having the data and reverse engineering the config is still a PITA and will e.g. screw with things like Windows activation due to changed system UUIDs.

On systems where I back up just via zfs send/receive, I also have a cronjob that rsyncs /etc/pve to its own ZFS dataset, so I can just plug the data into another system with the configuration included.
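A minimal sketch of such a cronjob, assuming a dataset mounted at /tank/pve-config (the schedule and path are placeholders):
Code:
# /etc/cron.d/pve-config-backup -- copy the cluster config to its own dataset every night at 03:00
0 3 * * * root rsync -a --delete /etc/pve/ /tank/pve-config/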
 
A dedicated PVE system disk can also be nice if you want a backup of the PVE system itself. You could use Clonezilla or the proxmox-backup-client to create a block-level backup of the whole system disk. That way you won't waste space, because you don't need to include your guests' virtual disks there if they are on a dedicated pool and backed up individually.

And like LnxBil already said, you should back up your /etc/pve too, because virtual disks on another pool without the guests' config files will be useless. You can also use the proxmox-backup-client to back up your config folders.
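For example, a file-level backup of the config directory to a PBS datastore could look roughly like this (the repository host, datastore and archive name are placeholders):
Code:
# back up /etc/pve as a pxar archive to a PBS datastore
proxmox-backup-client backup pveconf.pxar:/etc/pve --repository root@pam@192.168.1.50:backup-store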
 
Hi Zecas. I have an R720 and an R720xd. If you get one of these, I recommend the R720xd because it has more room for drives. I will say this though: I installed AlmaLinux 9, and when you are building the VM you have to choose a different processor type - I think I ended up choosing Sandy Bridge, because with the default processor the OS would just kernel panic on boot. At the end of the day it still runs the OS, and it doesn't look like it taxes the system any more than usual to do so. I like these servers because I was able to get a good amount of RAM and processor threads. Both of my servers have the processors you are looking at too. The R720 server has 192 GiB of RAM and the R720xd has 256 GiB. For me it was just too expensive to get this setup with newer hardware. Here are some of the things I have noticed that you might be interested in too.
* I was installing RHEL 8 on the hardware (during testing) and my mpt3sas something controller didn't show up. It turns out Red Hat removed a bunch of older drivers from the kernel and the controllers for these servers (the embedded ones) and another one (PCIe) I added weren't in the kernel. I had to load a kernel from elrepo.org which is the same thing just with the older drivers still in it. You could just install a sas3 controller to make this issue go away. I guess mine are sas2?!
* I was a bit worried when I had to change the processor type in the VM but it works and seems to work well. I believe these 2 servers will last long enough, until I am happy to buy some newer . . . err older hardware to replace them.
* I put a couple of 10G NIC cards in them but one is getting RMA'ed so I haven't tested that yet, but I am guessing the point-to-point connection between them will be fast enough to run backups/rsync/share storage or whatever between them.
* I got a smaller-ish four post rack for them and it is awesome because I am okay with a rack sitting in a spare room. If you don't need the R720xd, then I would consider getting a tower model because some of those will hold like 8 drives and are probably more quiet and don't require a rack.
* Just for your SA, I am running proxmox on both now.
* You don't need an adapter to install a 2.5" drive into a 3.5" sled if you are okay with having the drive screwed to just one side of the sled.
* Whatever you get, get IPMI, Dell calls it iDrac and it's 100% worth it. I got an enterprise iDrac license on both and it's awesome. I never have to go to the server unless it's something hardware I need to touch. One note is, there is a setting for iDrac that uses html5 for the remote console and it wasn't set to that by default on my servers. I had to connect a keyboard, monitor and mouse to them to change that setting initially but it has been like magic ever since.
* I noticed you mentioned Windows. I am not a Windows user so I can't help you there; it's been a long time since I've used a Windows OS. I have some Windows 10 VMs on my laptop, or I used to have, anyway. I do need a Windows domain controller to do some Samba 4 testing though, so maybe I will have a Windows server in a VM at some point. I try to keep work at work though.
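For reference, the CPU-type workaround mentioned above can also be set from the PVE CLI; a hedged sketch (the VM ID and the exact model that works for you are placeholders - EL9-based distros like AlmaLinux 9 need a CPU type that exposes x86-64-v2, which the default kvm64 model does not):
Code:
# set the virtual CPU model for VM 101 (SandyBridge is what worked in the post above; "host" is another option)
qm set 101 --cpu SandyBridge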
 

Ok, now I understand what you mean about the vdev speed. In that case I would have to get 6x PM897 480 GB disks and arrange them in 3 vdevs with 2 disks each. Hmm, I have to think about the cost of going that way vs. having cheaper 240 GB disks for PVE only.

As for the backup, just so I understand: I would only need to back up the /etc/pve folder once in a while, when there are changes to the VMs or to the Proxmox settings, correct? If the Proxmox settings are stable (no new VMs or settings changes), then there is no need to back it up on, say, a daily basis... though obviously it doesn't hurt if it's an automatic job :).

That tip you gave to rsync /etc/pve to its own zfs dataset is great!

A dedicated PVE system disk can also be nice if you want a backup of the PVE system itself. You could use Clonezilla or the proxmox-backup-client to create a block-level backup of the whole system disk. That way you won't waste space, because you don't need to include your guests' virtual disks there if they are on a dedicated pool and backed up individually.

And like LnxBil already said, you should back up your /etc/pve too, because virtual disks on another pool without the guests' config files will be useless. You can also use the proxmox-backup-client to back up your config folders.

Does that proxmox-backup-client come with PBS, or is it already installed on PVE? I mean, it should be run from the PVE machine to make a block-level backup of a PVE disk, right? I would then specify the destination repository (the PBS instance) as a parameter, from what I can read in the Backup Client Usage documentation.
 

Thanks for sharing your experience.

I also took a look at the R720xd, but the price would be higher and I don't really see the need for that many disks. The R720 I was looking at already takes 16 disks, and I'm planning to use 6 to start with. Maybe in the future I'll add 4 more disks, but after that I believe I would also start replacing them, so I think it has enough for my needs.

The machine also has iDRAC (I don't know which version), but I'll have no Enterprise license. Still, I'll be able to control it remotely, correct? If I'm able to enable HTML5, that would be awesome, as I have a Supermicro that runs a Java client and, even though it works, it's a bit cumbersome and looks old-fashioned by today's standards.

I'll have a dedicated room to install it in a rack cabinet, so for this server I don't think noise will be a problem, thankfully. I have no experience with Dell servers, but if they are anything like Supermicro ones, they're just too loud to have near a work environment.

One thing I'll still have to double-check is whether the PERC H710 Mini RAID card (which I'll flash to IT mode) and the disk cages can handle SATA3 disks, because I'm planning to install SATA3 SSDs in it :).

Thanks
 
Does that proxmox-backup-client come with PBS, or is it already installed on PVE? I mean, it should be run from the PVE machine to make a block-level backup of a PVE disk, right? I would then specify the destination repository (the PBS instance) as a parameter, from what I can read in the Backup Client Usage documentation.
Yep, it comes preinstalled with PVE. But there is also a client-only repository in case you want to install it on a Debian USB stick or something like that. For a block-level backup of PVE you don't want PVE to be running, so boot from something else.
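A rough sketch of what that could look like when booted from a live system with the client installed (the device path, repository host and datastore are placeholders):
Code:
# back up the whole PVE system disk as a raw image to a PBS datastore
proxmox-backup-client backup pve-root.img:/dev/sda --repository root@pam@192.168.1.50:backup-store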
 
Ahhh, you are getting the 2.5" drive version. I have the LFF version because I put 12TiB drives in it.

So for SATA3 you have to get the dedicated backplane. If you have one cable servicing 6 drives (I think), then it will support SATA3. If your backplane has a bunch of drives and two cables, then it won't. This YouTube video goes through it: https://youtu.be/vqB8kYfhtFc - he explains it better than me.

iDrac: I only do enterprise but I think the difference is being able to connect remote media and having a dedicated iDrac port. The remote media thing is like mapping an ISO from your laptop to the virtual CD/DVD drive of the server so you can install from there. I've used the feature and it's slow but it works. The dedicated port is nice because you can put your iDrac connection on a different firewall port or some security measure like that. It isn't the end of the world especially if you are wanting to save money.
 
Yep, it comes preinstalled with PVE. But there is also a client-only repository in case you want to install it on a Debian USB stick or something like that. For a block-level backup of PVE you don't want PVE to be running, so boot from something else.

Oh I see, that makes sense. Then again, if I have to boot from something else to make a clone of the PVE disk, it may be simpler to do it with Clonezilla.

After having the system up and running for some time, with a stable set of VMs installed, maybe doing a clone from time to time will be sufficient (at least the way I think about it).
 
Oh I see, that makes sense. Then again, if I have to boot from something else to make a clone of the PVE disk, it may be simpler to do it with Clonezilla.
Yep, that's what I did before. But Clonezilla can't store your block-level backup deduplicated on your PBS. You basically store the second disk of the mirror for free, because it's the same as the first one, and for each following backup you only need to store the parts that changed.
 

Yes, it's an SFF (small form factor) chassis, so I'll have 2 cages of 8x 2.5" caddies, a total of 16 disks.

My Supermicro server has LFF (large form factor) 3.5" disk caddies, and I only installed 2.5" disks, so I made some supports for them to fit more properly in the drive caddies and slide right into the backplane connectors.

So for SATA3 you have to get the dedicated backplane. If you have one cable servicing 6 drives (I think), then it will support SATA3. If your backplane has a bunch of drives and two cables, then it won't. This YouTube video goes through it: https://youtu.be/vqB8kYfhtFc - he explains it better than me.

Thank you for the video link. After watching it, I have a question: you wrote SATA3, but did you mean SAS3 instead? I believe the video talks about the possibility of upgrading from SAS2 to SAS3.

The server will have 2 cages of 8 disks, but at the moment I don't know if:
(a) it has 2 cables (which would mean an expander is installed for the 16 disks), or
(b) it has 4 cables installed (2 cables for each 8-disk cage, thus no expander).
In the latter scenario, one could upgrade the system with a SAS3-capable controller and install SAS3 drives.

But still, whichever scenario the server presents, I believe I will be able to connect SATA3 disks and take advantage of that bus speed.

iDrac: I only do enterprise but I think the difference is being able to connect remote media and having a dedicated iDrac port. The remote media thing is like mapping an ISO from your laptop to the virtual CD/DVD drive of the server so you can install from there. I've used the feature and it's slow but it works. The dedicated port is nice because you can put your iDrac connection on a different firewall port or some security measure like that. It isn't the end of the world especially if you are wanting to save money.

The Supermicro Java client also provides that feature of remotely mounting an ISO file (for instance), which greatly helps with bare-metal OS installation. If I lose that feature by not going for iDRAC Enterprise, it's a pity, but I guess I'll have to deal with it if that's the case. Does going for Enterprise require just a license, or is the iDRAC hardware different?

Thanks
 
The controller is SAS but it will work with SATA drives. You can put either SAS or SATA drives on a SAS controller, but not the other way around. You can't put a SAS drive on a SATA controller. Is that what you are asking?

The guy in the video was just saying he had a PCI lane for each drive, which makes it capable of handling faster drives. All of that is getting a bit out of my lane, because I'm pretty sure the servers I have use expanders. I don't need that kind of speed; I just run VMs and containers for the services I use. I will say that using a SAS controller with SATA drives may require a different cable, especially if you don't have a backplane converting the interfaces. These Dell servers will take either kind of drive, for instance.

If he was upgrading to SAS3, then he was making sure the data lanes were available for the extra speed. Either SAS2 or SAS3 will work with SATA drives, rust or SSD.
 

Yes, the important thing is being able to install SATA3 drives (rust or SSD) without any issues. I know the connectors are compatible; I was just afraid that the cage, or even the PERC H710 Mini controller, would not be for some reason.

As for SAS3, at the moment I see no reason to worry about that speed. By the time it makes a difference for me, I will most probably be searching for a more powerful machine, anyway.

Thanks.
 
1- 128 GB RAM (PC3-12800R) - memory speed is a notch higher than the 10600R above

Performance-wise, the optimal solution would be 1866 MHz memory (PC3-14900).

Sorry to revive this thread, but I'm looking into a possible upgrade and thinking of going for PC3L-12800R modules.

I'm not familiar with those types; maybe someone can help me understand:
  • Does PC3L mean it is an LRDIMM, and thus it will not have the rank rule when adding more memory modules?
  • Or does it simply mean it is a 1.35 V module, and I still have to check whether it's a single/dual/quad-rank module?

(some reference about dell r720 and memory modules)


Thanks.
 
