VM in RamDisk

Rip

New Member
Sep 27, 2021
Hello,

I just joined the forums, and I am a noob, please be gentle ;)

I’ve been searching but can’t find a tutorial on how to create a RAM disk on the host and have Proxmox use it for guest installs. I’ve found references saying this can be done, but I’m also trying to learn more about its limitations and how to go about it.

Can someone point me to some references?

What happens on reboot of the VM or host? Since the VM storage is RAM, I assume the VM is lost. Are there any scripts to back up the VM and then restore it on host/VM reboot?

Thank you
Rip
 
As far as I know PVE has no support for VMs on RAM disks. But PVE is based on Debian, so if you find a way to use RAM disks on Debian and use that as a directory storage, that may work. But wouldn't it be more useful to use persistent VM storage and then create RAM disks inside the VM?
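On the Debian side, a RAM disk can be as simple as a tmpfs mount. A minimal sketch (the mount point and size are examples, and root is required):

```shell
# Create a mount point and mount a 16 GiB tmpfs on it (size is an example).
mkdir -p /mnt/ramdisk 2>/dev/null || echo "mkdir failed; are you root?"
mount -t tmpfs -o size=16G tmpfs /mnt/ramdisk 2>/dev/null || echo "mount failed; are you root?"

# tmpfs contents vanish on reboot. To recreate the (empty) mount automatically
# at boot, add a line like this to /etc/fstab:
#   tmpfs /mnt/ramdisk tmpfs size=16G 0 0
```

The mounted directory can then be added as a directory storage in the GUI under Datacenter → Storage → Add → Directory.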
 
I was thinking of a VM’s guest OS.

For example, pfSense: I would create a RAM disk in Proxmox, then create a pfSense VM installed on the RAM disk, back up the VM, and restore from backups on reboot.

Basically, if you have enough RAM, one could have a completely RAM-based VM. The idea came about to avoid SSD wear in VMs that are fairly write-intensive and wouldn’t be rebooted often.
 
If you just want to reduce SSD wear, why not get some decent enterprise SSDs? My Intel S3700/S3710, for example, got 30 times the TBW of a consumer drive. So that's 21125 TB of TBW per 1 TB of storage, which you get second-hand for around 150€. I would bet that is cheaper than buying RAM.
 
Thank you

While the idea came about while thinking about SSDs… I think the approach is a sound solution. Now I’m more and more curious to see if it’s been done or can be done. It seems fairly straightforward.

Where can I learn about scripting this?

Are there commands that can back up VMs? Commands to restore and start VMs? And how can you kick off a script on boot or reboot?

Thanks
 
While the idea came about while thinking about SSDs… I think the approach is a sound solution. Now I’m more and more curious to see if it’s been done or can be done. It seems fairly straightforward.
But that's something I really wouldn't rely on. What happens, for example, after a power outage, when that virtual disk is gone? Keep in mind that VM config files and virtual disks are not stored in the same folder. If you delete (or lose) your virtual disk, you still have that config file, so Proxmox thinks the VM exists. If you then try to delete that VM using the GUI it will fail, because a VM with missing virtual disks can only be deleted using the CLI.
So you really need a lot of scripting.
Where can I learn about scripting this?
I would start with bash tutorials.
Are there commands that can back up VMs? Commands to restore and start VMs?
Look for the qm command for VMs and pct command for LXCs.
And how can you kick off a script on boot or reboot?
cron with "@reboot" at the start, or better a systemd unit for more control.
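As a sketch of how those pieces could fit together — the VMID (100), the storage ID ("ramdisk"), the script path, and the schedules below are all hypothetical placeholders, not commands from the thread:

```shell
# Write a small restore-on-boot script (VMID, storage ID and paths are examples).
cat > /usr/local/sbin/restore-ramdisk-vm.sh <<'EOF'
#!/bin/sh
# Pick the newest vzdump backup of VM 100, restore it into the tmpfs-backed
# storage (overwriting the stale config left behind), then start the VM.
latest=$(ls -t /var/lib/vz/dump/vzdump-qemu-100-*.vma* 2>/dev/null | head -n 1)
[ -n "$latest" ] || exit 1
qmrestore "$latest" 100 --storage ramdisk --force
qm start 100
EOF
chmod +x /usr/local/sbin/restore-ramdisk-vm.sh

# Kick it off at boot with cron (crontab -e):
#   @reboot /usr/local/sbin/restore-ramdisk-vm.sh
# And take periodic backups while the VM is running, e.g. nightly at 03:00:
#   0 3 * * * vzdump 100 --storage local --mode snapshot --compress zstd
```

The `--force` on `qmrestore` matters here because, as noted above, the stale VM config survives a reboot even though the RAM-disk volume it points at is gone.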
 
I was thinking of a VM’s guest OS.

For example, pfSense: I would create a RAM disk in Proxmox, then create a pfSense VM installed on the RAM disk, back up the VM, and restore from backups on reboot.

Basically, if you have enough RAM, one could have a completely RAM-based VM. The idea came about to avoid SSD wear in VMs that are fairly write-intensive and wouldn’t be rebooted often.
This is a great idea — thinking outside the box, I like it. Better than solving everything with expensive hardware.

It should work fine: set up a RAM disk, then add it as directory-based storage in Proxmox, and enable it for images and containers.
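On the CLI, registering the mounted tmpfs as a directory storage could look like this (the storage ID and path are examples; `pvesm` only exists on a PVE host, hence the guard):

```shell
# Storage ID and path are examples -- adjust to your setup.
STORAGE_ID=ramdisk
STORAGE_PATH=/mnt/ramdisk

# pvesm is only available on a Proxmox VE host, so guard on its presence.
if command -v pvesm >/dev/null 2>&1; then
    # Allow disk images and container root filesystems on this storage.
    pvesm add dir "$STORAGE_ID" --path "$STORAGE_PATH" --content images,rootdir
    # Treat the path as a mount point, so PVE refuses to write into the
    # bare directory when the tmpfs is not actually mounted:
    pvesm set "$STORAGE_ID" --is_mountpoint yes
else
    echo "pvesm not found: run this on the Proxmox VE host"
fi
```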

Then set up the VM and do the backups as you said. Make sure you do a backup once it's configured and online, so if you get an unexpected power cut before new backups are made, all you have lost is some logs.

Just bear in mind that pfSense itself supports using a RAM disk for logs (and RRD graphs as well, I think), so in this mode writes should be very low.

On my local Proxmox box I reduced the writes to a fraction of the default level merely by redirecting pveproxy logs to /dev/null, buffering nginx logs to only write once a minute, and disabling clustering services. On my two datacentre boxes, which run on spindles (and given I am not 100% sure of the consequences of disabling clustering services), I just redirected pveproxy and did the nginx buffering.
 
Add 'combined buffer=128k flush=1m' to the end of the access_log line in nginx.conf.
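The full directive would then look something like this (the log path and the `combined` format name are just the common defaults):

```nginx
# Collect up to 128k of log lines in memory and write them out at most
# once a minute, instead of one small write per request:
access_log /var/log/nginx/access.log combined buffer=128k flush=1m;
```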

If you're using an nginx frontend on Proxmox, the pveproxy access log is merely a duplicate, just logging localhost proxy access, so if you also route that to /dev/null you lose nothing.
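One commonly used way to do that redirect (a guess at the poster's method, not something stated in the thread) is to symlink the access log to /dev/null:

```shell
# Point the pveproxy access log at /dev/null (guarded so this is a
# no-op on machines that are not a PVE host).
if [ -d /var/log/pveproxy ]; then
    ln -sf /dev/null /var/log/pveproxy/access.log
    systemctl restart pveproxy    # reopen the log at its new target
fi
```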
 
How did you do that? That sounds useful. Fewer but bigger log writes should be good for the write amplification if you don't need logs in real time.

To give you an idea of the impact: in the first few weeks after I put Proxmox on my new SSDs locally, it had 4 erase cycles, just over 1 a week.

Since I made the changes I mentioned, it's not moved up and is still on 4; in addition, the I/O delay is about a third of what it was.
 
