running proxmox in RAM

tbol.inq

Member
Jul 8, 2016
Hi there,

Is it possible to boot a proxmox installation (installed on a usb stick) and run it in RAM (ramdisk, tmpfs, etc.) like VMware ESXi does?

I thought about something like:
  • load from usb
  • create fs in RAM
  • copy from usb to RAM
  • work on fs in RAM
  • write back all content to usb on shutdown
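
Roughly along these lines, just as an untested sketch (the mount points, tmpfs size and rsync excludes are only my guesses; the actual switch into the RAM copy would have to happen from an initramfs hook):

mount -t tmpfs -o size=8G tmpfs /mnt/ramroot
rsync -aAXH --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/mnt / /mnt/ramroot/
# ... switch_root into /mnt/ramroot would happen here, early in boot ...
# on shutdown, write everything back to the USB stick (assumed mounted at /mnt/usbroot):
rsync -aAXH --delete /mnt/ramroot/ /mnt/usbroot/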

kind regards,

tbol.inq
 
Booting from USB is not a problem, and all of the described steps except the last one are also easy (ordinary live Linux). The last step, however, involves a lot of scripting. Rather than a single copy at the end, it should really be a sync every $whatyoudefine minutes/hours or so. It should be possible, but the effort to do this is non-negligible.
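
As a rough illustration of that periodic sync, not a tested setup (the interval and mount points are placeholders):

# /etc/cron.d/ram-writeback -- sync the RAM copy back to the USB stick every 30 minutes
*/30 * * * * root rsync -aAX --delete /mnt/ramroot/ /mnt/usbroot/ && sync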

A working setup could involve puppet or some other orchestration tool to push the config to the node after it was booted and requested integration.

Do you want a single server or a cluster?
Where does the VM data go? If it is networked, then create a network-bootable Proxmox version with dedicated NFS storage, so that you have nothing on the machine itself. This is simpler than the USB solution and more reliable (USB sticks are not made for eternity).
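
The storage half of that is simple to define once the node is up, for example (server address and export path are placeholders):

pvesm add nfs vmstore --server 192.168.1.10 --export /export/pve --content images,rootdir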
 
Thanks for your reply. I also think it wouldn't take much to enable this, except for the last step. For the write-back to USB I would use rsync, but I need to know which folders Proxmox uses, starting from root (/).

I am thinking about a 2-node cluster with HA capabilities, or a 3-node cluster; as far as I've read, it is highly recommended to use at least a 3-node cluster for the HA features.

The VM data will be stored on a set of local disks passed through to a FreeNAS VM.


To guard against losing the USB stick (which you mentioned isn't made for eternity), I would back it up using dd ;)
 
> Thanks for your reply. I also think it wouldn't take much to enable this, except for the last step. For the write-back to USB I would use rsync, but I need to know which folders Proxmox uses, starting from root (/).

In a cluster you have the cluster filesystem (pmxcfs), which is stored in an SQLite database.
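
You can see where it lives on a node; note that a plain rsync of /etc/pve would only copy the mounted view, not the database behind it:

mount | grep /etc/pve           # pmxcfs is a FUSE filesystem mounted at /etc/pve
ls -l /var/lib/pve-cluster/     # config.db is the SQLite database backing it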

> I am thinking about a 2-node cluster with HA capabilities, or a 3-node cluster; as far as I've read, it is highly recommended to use at least a 3-node cluster for the HA features.

HA, and therefore quorum, can only be achieved with 3 nodes, nothing less, sorry.

> The VM data will be stored on a set of local disks passed through to a FreeNAS VM.

If I understand you correctly, you have the storage on one node and want to export it? What if this node fails? That is not an HA setup. You need "real" shared storage, where the data is stored on different hosts. You can use DRBD, Ceph, GlusterFS, ... for that; in any case the data has to be distributed across your cluster.

> To guard against losing the USB stick (which you mentioned isn't made for eternity), I would back it up using dd ;)

I'd install ZFS and use send/receive to replicate the changes. Much easier backup and restore.
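
Roughly like this (the pool, snapshot and target names are only placeholders):

zfs snapshot -r rpool@backup1
zfs send -R rpool@backup1 | ssh backuphost zfs receive -F backup/rpool
# later runs only need to send the changes since the last snapshot:
# zfs send -R -i rpool@backup1 rpool@backup2 | ssh backuphost zfs receive -F backup/rpool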
 
Hello there,

I would be very interested to know whether you accomplished your project.
I actually built a 3-node cluster installation with Ceph. Ceph requires entire disks, my servers are hosted at OVH so I can't easily add disks, and the PVE OS usually takes less than 3 GB, so it's frustrating to have to dedicate a whole disk to it...
So the only solution for me was to go with USB key installation...
I chose ZFS RAID1 from the installer and everything is working very well, except:

1- It's slow: the web interface is unresponsive and there are often timeouts in the Ceph section... is that due to Ceph itself or to the USB install?

2- I fear this setup will slow down not only the hosts (which I can live with) but the guests too: maybe some log writes, or any writes that have to finish before an operation can complete... (About that: can someone tell me whether it would be safe and useful to put /var/log on a RAM fs, combined with a syslog server for example? I've put a rough sketch of what I mean further down.)

3- I use 2 USB keys and scrub often with ZFS to make sure everything is OK, but I know the reputation of USB keys' lifespan; that's why I set up a ZFS RAID1. But if a problem occurs, I don't know how I'd be able to tell the guy in the datacenter which key has to be removed and have even half a chance that he unplugs the right one... a little bit silly :(
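
At least from the host side I suppose I could identify the faulty key before opening a ticket, something like this (rpool as the pool name is just my assumption):

zpool status rpool          # shows which member of the mirror is FAULTED or DEGRADED
ls -l /dev/disk/by-id/      # maps that device to a USB model/serial I could pass on to the datacenter tech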

With your solution, in case of a power failure or kernel crash the files cannot be written back to USB; the workaround is to reboot every time important changes or upgrades are done, why not... But about the Ceph part, I don't know Ceph well enough to tell whether there are important files written to the local disks that are not replicated among the other nodes during normal operation?

My idea was maybe not to go with a full run-from-RAM setup as you want to do, but to put only the most write-heavy directories in RAM... Can someone tell me what these directories would be? (I think of logs and RRD graphs, in which, by the way, I sometimes see gaps; are there others?)
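
What I have in mind for those directories is roughly this (completely untested on my side; the sizes and the rrdcached path are assumptions, and anything on tmpfs is lost at reboot, hence the syslog server):

# /etc/fstab -- keep the write-heavy directories in RAM
tmpfs  /var/log            tmpfs  defaults,noatime,size=256m  0  0
tmpfs  /var/lib/rrdcached  tmpfs  defaults,noatime,size=128m  0  0

# /etc/rsyslog.d/remote.conf -- forward logs to a central syslog host so they survive a reboot (hostname is a placeholder)
*.*  @logserver.example.com:514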

To finish, does anybody have recommendations for my use case? (Please don't tell me not to use USB, I have no choice.)

For example, searching Google, I found this tweak; does anyone have an opinion about it? I think it could be a good compromise:

"This tip can also lead to data loss. If you do it, you will have to always shut down your computer properly from now on, because unexpected power failures will lead to data loss.

Linux usually ensures that all changes are written to disk every few seconds. Since disk writes are so slow, you can change your system to keep things in memory longer. All changes will be written to memory, and the excruciatingly slow disk writes happen in the background while you continue working. This has an instant, noticeable effect, but it can lead to data loss.

Add these lines to /etc/sysctl.conf, and reboot.

vm.swappiness = 0
vm.dirty_background_ratio = 20
vm.dirty_expire_centisecs = 0
vm.dirty_ratio = 80
vm.dirty_writeback_centisecs = 0

The problem: using this tip means that your system stops writing changes to disk until you shut down or type "sync" at a command line. If your system loses power unexpectedly, you will get bad blocks. I did. You can limit the amount of data loss in the event of a power failure to one minute by setting vm.dirty_writeback_centisecs = 6000."

Thank you for reading, and for any advice or suggestions.
regards
 
Hi, I think swappiness works a little differently than described in many threads here. It may be that the host swappiness only changes after a reboot, but the swappiness for LXC containers can be changed without a host restart!

First, I looked at the current swappiness on the host:
cat /proc/sys/vm/swappiness

and got 60 - that is the default, which is not nice for hosts and absolutely bad for SSDs. We lost 2% wearout in 2 weeks!
So I tried setting it for a test, and this brought nothing:

sysctl -w vm.swappiness=10
sysctl -w vm.swappiness=1
sysctl -w vm.swappiness=0

Then I found that all LXC containers still had 60! We can change it from the host for an LXC container:

lxc-cgroup -n VMID memory.swappiness=10   (or 1), for example:
lxc-cgroup -n 109 memory.swappiness=1

Then we can verify on the host that this setting was applied (and other nice parameters to optimize the containers, too):

cat /sys/fs/cgroup/memory/lxc/109/memory.swappiness

and we get the new setting! But if we check inside the container, the result is not so good:

cat /sys/fs/cgroup/memory/memory.swappiness

and we get 60!

OK, if we can reboot the host, then this works too:
sysctl -w vm.swappiness=1

but not until the reboot! And rebooting the host for every optimization is strange, too.
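
(For the host value itself, to survive a reboot it has to go into a config file; the file name here is just my own choice:)

echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
sysctl -p /etc/sysctl.d/99-swappiness.conf   # loads the file now; the containers are a different story, see below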

So I don't know whether the 60 shown in the container is wrong, or whether it only displays 60 but internally works with 1; in any case I did not see the swap usage on the host go down from 100% to, say, 95% for a long time. What does work perfectly is setting it in the LXC config file: add a line to /etc/pve/lxc/VMID.conf, for example:

lxc.cgroup.memory.swappiness: 1

Save and reboot the LXC container. Not in every OS, but for example in Debian, we can now see IN THE CONTAINER that swappiness is set correctly:

cat /sys/fs/cgroup/memory/memory.swappiness

and we get 1! In some other OSes the cgroup filesystem is not mounted inside the container, so the info is not visible under /sys/fs.

After that I no longer see swap growing in the container, and after the reboot the host's swap usage dropped from 100% to 90% - so the remaining 90% comes from the other LXC containers. We have enough free RAM, and we don't like swapping when only 50% of RAM is in use (with the other 50% used for cache), because it puts the filesystem under constant load.

I have seen discussions saying that swappiness 10 is better for a server, and also that for the longevity of SSDs used as swap area, a swappiness of 1 is better. With 1, swap is only used when there is not enough RAM available.

Now I can control every container individually, set it to 1 or 10 (and other limits too), reboot just the LXC container, and get a good result immediately, without the longer restart time of a host reboot.
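
So in short, per container, this is the whole procedure that works for me (109 is just my example container):

echo 'lxc.cgroup.memory.swappiness: 1' >> /etc/pve/lxc/109.conf
pct stop 109 && pct start 109                         # restart only this container, not the host
cat /sys/fs/cgroup/memory/lxc/109/memory.swappiness   # from the host: should now show 1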
 
I installed a minimal version of Windows 11, then VirtualBox, and installed Proxmox inside VirtualBox, running in RAM (https://www.youtube.com/watch?v=R9zEIJiy5Uk). In VirtualBox I had to forward port 8006 using this method:

Forwarding Host Port to Guest Port in VirtualBox

Here's how to forward a host port to a guest port in VirtualBox:

Prerequisites:

  • VirtualBox application installed and running.
  • Your virtual machine must be shut down.
Steps:

  1. Select Virtual Machine: In the VirtualBox Manager window, highlight the virtual machine you want to configure port forwarding for.
  2. Open Settings: Click the "Settings" button located in the top menu of the VirtualBox Manager window.
  3. Network Tab: In the Settings window, select the "Network" tab from the left-hand pane.
  4. Adapter Configuration: Under the "Network" tab, you'll see one or more network adapter configurations. Typically, there will be an "Adapter 1" option.
  5. Attached To: Make sure the "Attached To" dropdown menu is set to "NAT" (Network Address Translation).
  6. Advanced Button: Click the "Advanced" button located on the right side of the "Adapter 1" section.
  7. Port Forwarding: In the "Advanced" settings window, click the "Port Forwarding" button.
  8. Add Rule: Click the "+" icon (plus sign) on the right side of the window to add a new port forwarding rule.
  9. Rule Configuration: In the new rule configuration:
    • Name: Enter a descriptive name for the rule (e.g., "Proxmox Web GUI").
    • Host Port: Enter the port number you want to access on the host machine (e.g., 8006).
    • Guest Port: Enter the port number where the service is listening in the guest machine (e.g., 8006).
    • Protocol: You can choose TCP or UDP depending on the service's communication protocol (typically TCP).
    • Leave the "IP address" fields blank. By leaving them blank, the rule applies to any IP address on the host machine.
  10. OK Clicks: Click "OK" on both the "Port Forwarding Rules" and "Advanced" settings windows.
  11. Start Virtual Machine: Click the "Start" button in the VirtualBox Manager window to start your virtual machine.
Accessing the Service:

Once your virtual machine is running, you should be able to access the service running on the guest machine's port by accessing the corresponding port on the host machine. In your example:

  • Host Port: 8006
  • Guest Port: 8006
You would access the Proxmox VE Web GUI by opening a web browser on your host machine and navigating to:

https://localhost:8006
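
The same rule can also be created from the command line instead of the GUI, while the VM is powered off (the VM name "proxmox" is just an example):

VBoxManage modifyvm "proxmox" --natpf1 "pve-gui,tcp,,8006,,8006"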

Important Notes:

  • Ensure the service in the guest machine is actually listening on the specified port (8006 in this case).
  • If you have multiple network adapters configured in the virtual machine settings, make sure you're forwarding the port on the correct adapter.
  • Firewall rules on both the host and guest machines might prevent access. Make sure the relevant ports are allowed through any firewalls.
By following these steps, you should be able to successfully forward a host port to a guest port in VirtualBox and access the service running on the guest machine.

I used this https://download.microsoft.com/down...2-4A7E-AA03-DF6C4F687B07/dgreadiness_v3.6.zip to disable Hyper-V, so that VirtualBox could give the CPU to Proxmox, by running DG_Readiness_Tool_v3.6.ps1 -Disable. Now Proxmox is running completely in RAM. It is also handy for restarts etc.
 
