LXC Container clone to ramdisk

alexgoaga — Mar 19, 2021
Hello,
I'm trying to clone a small LXC container to a ramdisk storage location (I don't care if I have to do it manually each time etc., I need the speed and I want to spare the SSD. The container has an 8 GB disk but in reality only uses ~1 GB).

I've tried my own idea:

Code:
On the Proxmox host terminal:
mkdir /tmp/ramdisk
chmod 777 /tmp/ramdisk

nano /etc/fstab
    insert this line:
    myramdisk  /tmp/ramdisk  tmpfs  defaults,size=10G,x-gvfs-show  0  0

mount -a   # to mount it


In the web UI, under Datacenter -> Storage -> Add -> Directory, I added a storage called "Ram" with path /tmp/ramdisk and selected every content type (CLI equivalent below).

Then I stopped the LXC container and tried to clone it to the new ramdisk storage.
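
For reference, I think the same directory storage can also be added from the CLI, something like this (untested sketch; "Ram" is just the name I picked):

Code:
pvesm add dir Ram --path /tmp/ramdisk --content images,rootdir,vztmpl,iso,backup,snippets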

I receive this error:

Code:
create full clone of mountpoint rootfs (local-zfs:subvol-1003-disk-0)
Formatting '/tmp/ramdisk/images/101/vm-101-disk-0.raw', fmt=raw size=8589934592
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 97321245-6ed4-4bae-9499-cfdea510a206
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
mkfs.ext4: MMP: open with O_DIRECT failed while writing out and closing file system
TASK ERROR: clone failed: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /tmp/ramdisk/images/101/vm-101-disk-0.raw' failed: exit code 1


How do I... force it there... or move it or copy it there?

Yes, folders are created there; I tried something like the dd command to check the speeds:

dd if=/dev/zero of=/tmp/ramdisk/zero bs=4k count=100000

Yes, I do know the risks, but I need it to be done... without investing in another SSD.


Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve-relaxablermrr)
pve-manager: 6.4-4 (running version: 6.4-4/337d6701)
pve-kernel-5.4: 6.4-1
pve-kernel-helper: 6.4-1
pve-kernel-5.4.106-1-pve-relaxablermrr: 5.4.106-1
pve-kernel-libc-dev: 5.4.106-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-2
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-1
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-3
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-1
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
Why don't you just use files without the additional ext4 layer? The easiest method is to use ZFS, so that you have a directory to store your files in, and mount a tmpfs over the underlying one.
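
Rough idea of what I mean (the dataset name is just an example):

Code:
# create a ZFS dataset that gives you a plain directory on the host
zfs create rpool/data/fastdata
# shadow it with a tmpfs on the same path, so the files live in RAM
mount -t tmpfs -o size=2G tmpfs /rpool/data/fastdata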

How do I... force it there... or move it or copy it there?
tmpfs does not support O_DIRECT, so you will always have this error and cannot create the container with the GUI. You will end up doing everything by yourself :-/
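
You can see this for yourself with a direct write to the tmpfs (quick check, using the path from your setup):

Code:
# fails with "Invalid argument" because tmpfs has no O_DIRECT support
dd if=/dev/zero of=/tmp/ramdisk/directtest bs=4k count=1 oflag=direct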

I need the speed and I want to spare the SSD
Buy better SSDs... With the normal buffer cache (ext4) or the ZFS ARC you can also have everything in RAM, it just has to be read once.


What kind of data is this that you need everything in RAM? If it is "own data", i.e. no OS stuff (which is often read only once), just bind-mount a tmpfs into the container.
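
Just as a sketch (assuming container ID 101 and that the data should show up under /data inside the container):

Code:
# create a tmpfs on the host ...
mkdir -p /mnt/ct101-ram
mount -t tmpfs -o size=2G tmpfs /mnt/ct101-ram
# ... and bind-mount it into the container as mount point mp0
pct set 101 -mp0 /mnt/ct101-ram,mp=/data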
 
I don't understand what you mean by "Why don't you just use files without the additional ext4 layer?"

I would buy some... SSDs, but not right now.

The data is on a proxy-like server that receives data from one server (download) and sends it on to another server in chunks.
How would I "just bind-mount a tmpfs in the container"?


I did something that I'm not proud of.

Code:
Clone the container to local storage
Move the disk to a random remote CIFS share so that the .raw file gets created
Mount that remote CIFS share on the Proxmox host

Copy the .raw from the images directory to the ramdisk, in my case:
cp -r "/media/xxxxxxx/images/5003/" "/tmp/ramdisk/images/5003/"

Sneak-attack the container's config file:
nano /etc/pve/nodes/xxxxxxxx/lxc/5003.conf
Change the storage name in the disk entry from the storage the disk was moved to, to the ram storage.
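
The disk line I edited looked roughly like this (IDs are from my setup, the CIFS storage name here is made up):

Code:
# before (disk living on the CIFS storage it was moved to):
rootfs: cifs-storage:5003/vm-5003-disk-0.raw,size=8G
# after (pointing at the ramdisk directory storage):
rootfs: Ram:5003/vm-5003-disk-0.raw,size=8G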
 
Just an idea:

A long time ago I was doing boot-on-SAN for my workstation. To speed up responsiveness after booting, the OS disk was on software RAID 1 (mdadm) with a hard drive on one side and a ramdrive on the other. With the write-behind option, reads were always served from the RAM side and writes went to both the hard drive and the RAM.

I actually use the same setup on my server today, with the RAM replaced by an NVMe drive and the hard drive replaced by an SSD.

Code:
md0 : active raid1 nvme0n1p2[0] sda2[1](W)
      19513344 blocks super 1.2 [2/2] [UU]

You can see the (W) marker for the write-behind device.

With this setup nothing is lost on power-off.
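
Creating such an array looks roughly like this (sketch only; the device names are from my box, and --write-behind needs a write-intent bitmap):

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/nvme0n1p2 --write-mostly /dev/sda2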
 
I don't understand what you mean by "Why don't you just use files without the additional ext4 layer?"
You're using a raw file, so you have your disk, a filesystem on top of that, then your ramdrive, a raw file on top of that, and then another filesystem in which your files live.

ZFS and btrfs are the only options to circumvent that, so that you get rid of the raw file and the additional filesystem. If you just had plain files, you'd bind-mount a tmpfs (ramdrive), rsync your machine onto it and just use it. No fuss...
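
Roughly like this (a sketch; the source path is just an example):

Code:
# fill a tmpfs once from the existing files, then bind-mount it into
# the container as shown earlier
mkdir -p /mnt/ramcopy
mount -t tmpfs -o size=2G tmpfs /mnt/ramcopy
rsync -a /rpool/data/mydata/ /mnt/ramcopy/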
 
