CPU cores and threads

Thank you for your response
What exactly is slow and how much time does each individual step take?


To boot the VM? I think this is absolutely reasonable.

I boot my VMs partly sequentially, partly in parallel. In the end it takes some time to get them all up. I have never tried to do everything in parallel; it also does not make sense because of the dependencies...
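
For what it's worth, the start order and delays can be set per VM from the host shell; the VMIDs and delay values below are only placeholders for illustration:

    # Have the VM start automatically when the host boots
    qm set 100 --onboot 1
    # Start it first in the sequence, wait ~30 s before the next VM,
    # and allow up to 60 s for a clean shutdown on host reboot
    qm set 100 --startup order=1,up=30,down=60
    # A VM that depends on the first one simply gets a later order
    qm set 101 --onboot 1 --startup order=2,up=30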

Once we have some information about your exact times I think we can judge better.
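
If it helps, something like the following already gives usable numbers. This assumes the Proxmox host and Linux guests; TrueNAS Core is FreeBSD-based, so the systemd commands do not apply inside that guest, and the VMID is just an example:

    # On the Proxmox host: how long the host boot itself took,
    # and which services were the slowest
    systemd-analyze
    systemd-analyze blame | head -n 20
    # Rough wall-clock time for starting a single VM from the CLI.
    # Note: "qm start" returns once QEMU is launched, not when the
    # guest OS has finished booting, so also watch the console.
    time qm start 100
    # Inside a Linux guest, the same systemd-analyze commands work
    # again to show where the guest spends its boot time
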
The process of starting the VMs is fluid.

When I look at the console, I see the process move forward; once in a while it stops at a line for a few seconds, then continues.

They just take a lot of time in my opinion (in total less than 2 minutes for both VMs).

I was just wondering whether adding more RAM and/or more cores would reduce that time.


Thank you once again
 
Slo
Hello all

I have my Proxmox 8.0.4 up and running fine, thank you all.

I installed 128 GB of RAM on the server, thinking that this would be enough for the VMs to "swim" freely.

Just to tweak and nitpick

My issue
When I need to reboot the Proxmox server (installed on a Dell R720 with 2x E5-2660 and 128 GB RAM), the boot process and the start of the VMs seem slow.

I allocated 4 cores and 80 GB of RAM to the TrueNAS Core VM.

It still takes over a minute to start, and I see that the TrueNAS VM is using 99% of the available RAM and, at some points, 90% of the cores.

Since I only have 24 TB of spinning drives and the boot drive is a 480 mb SSD, I would imagine this process would be faster.

As a precaution and practice (maybe not necessary): when either Proxmox or any of the VMs (or their components) has changes, upgrades, and/or updates, I tend to reboot the VM and, since I am there, reboot Proxmox as well.

To speed this up (or am I just being impatient), what should I upgrade: the CPUs, the RAM, or both? Or should I just reset my expectations and let the VMs carry on as they are doing now?

Best practice
Do I need to start the VMs sequentially or automatically? Does it make a difference overall?

I do not want to start tweaking the system on my own because I'm certain I will foobar it.

Thank you for your time, patience and help in this matter

VM boot speed is primarily driven by the performance of the disks backing the virtual boot disk. If your VM boot disks are all backed by those spinning drives, the VMs will boot slowly. I always back all VM boot disks with a Ceph pool that has an "SSD only" device class rule; VMs boot pretty quickly on these.

You mention having a single 480 "mb" (assume you mean GB?) SSD for this purpose, but have you checked to ensure that the VM boot disks are in fact on that drive? And are you sure that's how you want it even if they are? A single disk is a point of failure...
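
If it's useful, here is a quick way to verify that from the host shell; VMID 100 and the storage name "local-lvm" are only examples, substitute your own:

    # Show which storage each virtual disk of the VM lives on
    qm config 100 | grep -E '^(scsi|sata|virtio|ide|efidisk)'
    # List the storages defined on this node and how full they are
    pvesm status
    # List what a specific (SSD-backed) storage actually contains
    pvesm list local-lvm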
 
Slo


Hi
Thank you for correcting me; yes, my SSD is 480 GB.

My VMs are running on the SSD. Only data is on the HDDs.

I am considering my 2nd NAS (a Supermicro X10 with 32 GB of RAM and a 500 GB boot drive). I will add additional RAM to this server and it will act as a redundant server to my Dell.

I truly do not know what a Ceph pool is; I will need to read up on this.

Since you are not mentioning any benefit of additional RAM on my Dell, should I interpret that as meaning there is none?
 
I would double check that your VM boot disks are indeed on that SSD.

Ceph is software-defined storage across a cluster of servers. I think it requires a minimum of 3 servers for testing / proof of concept, 4 for low-impact stuff like homelabs, and 5+ for production clusters hosting commercial workloads. In my experience it's the best way to achieve remarkably high data availability and integrity. Drive failures and even entire node failures are not an issue with Ceph.
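
For reference, the "SSD only device class" rule mentioned above is just a CRUSH rule restricted to OSDs of class ssd, roughly like this; the rule and pool names here are only examples:

    # Create a replicated CRUSH rule that only selects OSDs of class "ssd"
    ceph osd crush rule create-replicated ssd-only default host ssd
    # Point the pool holding the VM boot disks at that rule
    ceph osd pool set vm-disks crush_rule ssd-only
    # Check which device class each OSD has been assigned
    ceph osd tree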

The right amount of RAM depends on workloads... More RAM is only helpful if the system will make use of it; otherwise it just makes the underlying server take longer to boot. I have 128 GB per node on my home cluster (x4) = 512 GB total, and this is a good number for homelab experiments involving many virtual machines, but probably overkill for most home servers. At work we have 3 TB of RAM on a 6-node cluster. We have lots of VMs and like to have enough RAM that all workloads can run comfortably from 4 servers, so that we can handle being down a node and still perform maintenance (reboots) on the remaining nodes without an issue.
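
Before buying more RAM, it is worth checking whether the host is actually short on memory; something like this on the Proxmox host gives a rough picture (the last command only applies if the host uses ZFS):

    # Overall memory picture on the host (used vs. cache/buffers)
    free -h
    # Configured memory per VM, to compare against the host total
    qm list
    # With ZFS on the host, the ARC cache holds a lot of RAM by design
    arcstat 1 3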
 
Thank you for the teachings.

All the software for all my VMs is on SSD and runs from SSD.

Be well
 
