[Feature Request] proxmox as ram live system

Xyz00777

Active Member
Apr 4, 2019
Hi everyone,

As already mentioned in this thread, I would really like to see an option in Proxmox to use, for example, a USB stick on which everything is stored and which gets loaded into a RAM disk at boot time, similar to how ESXi operates.

An option for backward compatibility would be to choose at install time whether you want the RAM version or the full-disk version.

Recently, I encountered a significant issue where my boot drive completely failed due to excessive write operations. With an option like this, I would have been able to save the data from the RAM disk and keep my system running until I decided to shut it down. As it stands, I now need to replace the drives and rebuild everything from scratch. Fortunately, my critical data was not affected by this failure, because it was on another drive.

Advantages:
  1. Improved Performance:
    • Reduced latency and improved system responsiveness due to operations being performed in RAM.
    • Heavy disk operations don't slow down the web GUI.
  2. Enhanced Longevity of Storage Media:
    • Reduced wear and tear on the storage device (e.g., USB stick or SSD) as read and write operations are minimized after boot.
  3. Increased Reliability:
    • Decreased risk of disk failures during operation since the system primarily runs from RAM.
    • Potentially less data corruption as the system disk is not continuously accessed.
    • In cases of drive failure, data in RAM can be saved, allowing the system to continue running.
Disadvantages:
  1. Memory Usage:
    • Increased RAM usage since the entire system needs to be loaded into memory.
  2. Data Volatility:
    • Any changes made during operation need to be explicitly saved back to the storage device; otherwise, they will be lost on reboot.
    • For example, logs shouldn't be written back, but configs and application updates should be (see the sketch after this list).
  3. Complexity in Development/Implementation:
    • Requires development and maintenance of a robust mechanism to ensure changes are reliably saved back to the storage device and reloaded correctly on boot.
  4. Initial Boot Time:
    • The initial boot time might be slightly longer as the entire system needs to be loaded into RAM, although this is offset by faster performance thereafter.
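
To make the second point concrete, a minimal sketch of what such a selective write-back could look like (everything here, including the /mnt/stick mount point, is hypothetical, not an existing Proxmox mechanism):

Code:
#!/bin/sh
# Hypothetical save-back step at shutdown: persist configs and package
# state to the boot medium, but deliberately skip volatile logs.
# /mnt/stick is an assumed mount point for the USB stick.
rsync -a --exclude='/var/log' /etc /var /mnt/stick/
sync
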
I believe these advantages significantly outweigh the disadvantages, and implementing this feature would be a valuable addition to Proxmox, providing users with greater flexibility and performance options. Thank you in advance for considering this feature request.

Best regards,
Xyz00777
 
Whether or not this ever gets implemented (IMHO it would require significant changes), in the meantime:

- Use proper enterprise drives in a mirror configuration. Or at least not the cheapest/oldest ones you can find.
- Do backups for your logs, configs, VMs, as any drive can fail unexpectedly.
- Monitor your SSDs' wearout and SMART values to predict whether a drive may fail soon. Some of that information is exposed in the webUI.
- Monitor your disk stats and find out what is writing to your disks (see the example commands below).
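
For the last two points, the standard tools already go a long way; a few commands as a starting point (device names are just examples):

Code:
# SMART health for an NVMe drive; wearout shows up as "Percentage Used":
smartctl -a /dev/nvme0
# Per-device I/O statistics, refreshed every 5 seconds (sysstat package):
iostat -dx 5
# Accumulated per-process writes (iotop package):
iotop -ao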
 
Hi Victor, I'm aware that it's not a minor change and will take some time to develop, but I believe it's the right direction to go and would make Proxmox much more resilient and performant.

Maybe it would also be possible to save specific config and application versions at the same time, so that if a config change or application update breaks something in Proxmox, it would be much simpler to perform a rollback.

Yeah, I thought I had bought good drives, but it looks like I didn't :/
 
a USB stick on which everything is stored and which gets loaded into a RAM disk at boot time, similar to how ESXi operates.

Correct me if I am wrong, but even they have moved away from this approach, highly recommend against SD cards / USB media, and no longer support/certify new hardware with them as boot devices.

There are obviously reasons why (big) enterprise solutions are moving away from this approach.
iXsystems, for another example, meanwhile also discourages the use of USB sticks as boot devices.

Imho, like @VictorSTS already said, just use proper hardware for the job and use the possibilities that are already there, like redundancy. Furthermore, make backups of the configuration files and/or at least have everything well documented, and monitor your whole system.

There is also Unraid, which uses exactly (and exclusively) this approach. It is mainly a NAS solution, but can also run VMs (and Docker containers). Maybe you might want to have a look at it...
 
just use proper hardware for the job

When I use proper hardware in a proper setup, I tend to boot even my regular servers over (i)PXE into RAM disks. Is everyone on this forum highly pro, using triple mirrors for the OS drive, all with PLP just so the config.db with WAL can be flushed onto them perpetually, all on multi-node clusters with plentiful ECC RAM and UPSes with backup generators, while deploying it all by installing locally from ISO, all of the time? And all that while already maintaining HA shared storage for all of the nodes? You may not like the reason given by the OP, but it's a reasonable request; so was unattended install, and that took 10 years to implement. Meanwhile, the SLC SD cards and their redundant SD modules became obsolete, yes.
 
We are in 2024; you can use small M.2 NVMe DC-grade drives with PLP.

Like the DC1000M U.2 NVMe SSD (50€ for 1TB).

You can also use them in hardware RAID1 (Dell, HP, Supermicro, Lenovo, ... all have small internal controllers for two NVMe drives in RAID1).



Proxmox uses a distributed/replicated filesystem for /etc/pve/, with corosync handling the locks/messaging. So a RAM live system will never happen.
 
We are in 2024; you can use small M.2 NVMe DC-grade drives with PLP.

Which one (with PLP and 2280) would that be?

Like the DC1000M U.2 NVMe SSD (50€ for 1TB).

This is a typo, obviously?

Proxmox uses a distributed/replicated filesystem for /etc/pve/, with corosync handling the locks/messaging. So a RAM live system will never happen.

So the dev team is too inept to fix the design post-2024?
 
To which one are you referring? I mentioned several, all of which I think have PLP (some I know for sure).
Oh sorry, I was looking at the one you replied to (the Lexar). Yes, but the issue with the only ones I know, the Micron 7400 and 7450, is that e.g. the 7450 tops out at 1T in the 2280 form factor, and that doesn't have the same write-endurance specs as its larger cousins:

https://www.micron.com/content/dam/...keting-brief/7450-nvme-ssd-tech-prod-spec.pdf

I literally only know of this one Micron as a 2280 drive with PLP, and that's it. And lots of people on the forum (arguably, if the OP ran it off USB, he would be in that category) only have the 2280 option (or SATA). There's the DC600 for that, but it's not exactly impressive either in 2024.
 
If @Xyz00777 really wanted to do this, I don't think it's impossibly difficult to implement. It doesn't sound like a great solution for a production-ready system, as you are introducing a lot more non-standard moving parts that can result in lots of frustration when things break. But if you are willing to bear the maintenance cost, the feature shouldn't be insanely difficult to implement.

It also requires having copious amounts of RAM. By the time you are done buying all that RAM, you might realize that it would have made more sense buying better drives. But on a case-by-case basis, I could see this working out OK.

If I was going to try this, I'd implement all the functionality as a new hook script in the initramfs. In fact, I have done something similar for IoT devices before, but in that case I didn't care about being able to write back any of the changes, which simplified things. Also, that code was a bit of a quick hack; I link to it below, but I am not particularly proud of its quality or readability. It probably needs to be rewritten.

For these IoT devices, I have an initramfs script that decides at boot whether to run in ephemeral or persistent mode. In the latter case, I just boot a regular system: the root filesystem is mounted read-write, and all changes go straight to the device. This is the mode I boot into whenever I need to make persistent changes (e.g. upgrade the system).

But when booting in ephemeral mode, I mount the SD card read-only and put an overlayfs on top. With some bind-mount trickery, this becomes the new root filesystem, and from then on everything runs in RAM. In fact, with a little bit of effort, I can carve out parts of the filesystem that remain writable (e.g. my home directory). This is perfect for IoT purposes, where I want to minimize disk writes but not eliminate them completely.
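
Condensed, the ephemeral branch of such a hook boils down to a handful of mounts. This is only a sketch, assuming the real root is already mounted read-only on /root as Debian's initramfs does:

Code:
#!/bin/sh
# Back all writes with a tmpfs, so they land in RAM.
mkdir -p /run/overlay
mount -t tmpfs -o size=50% tmpfs /run/overlay
mkdir -p /run/overlay/upper /run/overlay/work /run/overlay/merged
# Stack an overlayfs on top of the read-only root.
mount -t overlay overlay \
    -o lowerdir=/root,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /run/overlay/merged
# Hand the merged view over as the new root before init starts.
mount --move /run/overlay/merged /root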

So, this is 90% of where the OP wants to go. The missing part is automatic write-back on system shutdown. That is doable, but requires a little more care, as you can't directly write to the backing store of an overlayfs. But if you formatted your root filesystem so that it supports snapshots (e.g. ZFS or btrfs), then I can see this working: in the initramfs, create a new read-only snapshot, and then put the overlayfs on top of that snapshot. For ZFS, you'd use the invisible "/.zfs/snapshot/" directory to do this; I don't know how this works with btrfs.
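
With ZFS, the snapshot-as-lowerdir idea could look roughly like this; the dataset name is the Proxmox default, everything else carries over from the sketch above and is equally untested:

Code:
# Hypothetical initramfs steps, with the pool already imported:
zfs snapshot rpool/ROOT/pve-1@ephemeral
# ZFS exposes read-only snapshots under the hidden .zfs directory;
# that path then serves as the overlay's lowerdir instead of /root.
mount -t overlay overlay \
    -o lowerdir=/root/.zfs/snapshot/ephemeral,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /run/overlay/merged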

Keep the read-write portion of this filesystem (i.e. the non-snapshot part) accessible somewhere else. Then, upon system shutdown, use "rsync" to copy from the overlayfs back into the live filesystem. You can even arrange for a reboot into the old snapshot upon power failure, which makes you relatively resilient against crashes.
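
A shutdown hook along these lines could do the write-back; again just a sketch, with an assumed /sysroot-rw mount point, and untested with Proxmox:

Code:
#!/bin/sh
# The overlay's upper directory holds exactly what changed since boot;
# /sysroot-rw is an assumed read-write mount of the real root. Note that
# overlayfs records deletions as "whiteout" character devices, which a
# faithful write-back would need to translate into actual deletions.
rsync -aHAX /run/overlay/upper/ /sysroot-rw/
sync
# Once the copy is verified, the boot-time snapshot can be dropped:
zfs destroy rpool/ROOT/pve-1@ephemeral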

For a general idea of what to do, look at https://github.com/gutschke/overlayfs. I am not convinced that it works out of the box with Proxmox; as I said, it was a bit of a quick hack, so try it in a VM first. You don't want to end up with a system that can't be booted. If you find that you can't boot because you broke your initramfs, you can usually fix things by editing the kernel's command line from the bootloader: remove the "boot=overlay" option.
 
It also requires having copious amounts of RAM. By the time you are done buying all that RAM, you might realize that it would have made more sense buying better drives. But on a case-by-case basis, I could see this working out OK.

On a fresh install, du -sh / shows 2.8G. My partition is usually 8G. But maybe I have misunderstood.
 
It all depends on what you want to store. If it is just the Proxmox system, you probably won't incur too much RAM usage. If it also includes your VMs and containers, then the amount of required RAM can quickly balloon.

Any changes that you make to your files will now go into RAM and you can't really get them out of RAM until the next reboot. That means, an "apt update" or an "apt dist-upgrade" can quickly eat up a gigabyte or maybe even two. And there certainly are other things that can eat up space on your RAM disk.

If you keep your VMs and containers on separate storage, and if you have a spare 8GB to 16GB, I feel this is reasonably safe to do, and you won't compromise your system too badly by running into emergency out-of-memory situations. But if you have less memory, you need to be careful (I regularly encounter this problem with my Raspberry Pis, when I am not paying attention before an "apt dist-upgrade"). And if you want your containers or VMs to run in RAM, then you better have hundreds of gigabytes.
 
It all depends on what you want to store. If it is just the Proxmox system, you probably won't incur too much RAM usage. If it also includes your VMs and containers, then the amount of required RAM can quickly balloon.

Alright, I somehow assumed the reference to a USB stick and ESXi related to the hypervisor alone (with the storage shared). I also assumed the OP was referring to the shredding writes of config.db flushes onto a non-PLP SSD. (But if this was ZFS, then write amplification etc. could have added up.)

Running VMs off RAM would be even more trivial, but I somehow do not think this was what the OP intended.

Any changes that you make to your files will now go into RAM and you can't really get them out of RAM until the next reboot. That means, an "apt update" or an "apt dist-upgrade" can quickly eat up a gigabyte or maybe even two. And there certainly are other things that can eat up space on your RAM disk.

I have run Linux machines on 8G partitions for ages, and very rarely did I need to mount an extra /tmp.

If you keep your VMs and containers on separate storage, and if you have a spare 8GB to 16GB, I feel this is reasonably safe to do, and you won't compromise your system too badly by running into emergency out-of-memory situations. But if you have less memory, you need to be careful (I regularly encounter this problem with my Raspberry Pis, when I am not paying attention before an "apt dist-upgrade").

I understand this for the IoT case, but that's a fairly different use case, with the overlayfs and read-only SD literally there to save the card from regular logging. You can't really add RAM to a Pi. But on a server, what's 8G really?

And if you want your containers or VMs to run in RAM, then you better have hundreds of gigabytes.

This is totally dependent on the workload, but I completely did not consider that was part of the question, maybe wrongly.
 
