LXC container disk persistence

degudejung

Member
Jun 17, 2021
Hi,
first of all, thanks for providing such an amazing piece of software for free. I really enjoy running Proxmox in my homelab and would certainly choose it over TrueNAS or OMV again! Now, there is one issue that I am worried about:

Last night I had to restore an LXC container from backup. The backup and restore themselves worked just fine. But now the three disks I had mounted into the CT are completely wiped and empty. That is particularly ugly, since they are/were the Time Machine backup volumes for all the Macs in the house.

Here's what the configuration was and what I did:

/etc/pve/lxc/100.conf
Code:
arch: amd64
cmode: shell
cores: 2
features: nesting=1
hostname: ct-webmin
memory: 512
mp0: pool1s1:vm-100-disk-1,mp=/srv/smb_temp,size=200G
mp1: pool1s1:vm-100-disk-2,mp=/srv/smb_tm_n,size=600G
mp2: pool1s1:vm-100-disk-3,mp=/srv/smb_tm_a,size=400G
mp3: /pool1h1/software,mp=/srv/smb_software
[..other mountpoints for all the ZFS datasets I want to share via SMB..]
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.10.1,hwaddr=6E:21:EB:39:18:CB,ip=192.168.10.22/24,type=veth
onboot: 1
ostype: debian
rootfs: pool1s1:vm-100-disk-0,size=3G
startup: order=1,up=30
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 100
lxc.idmap: u 1000 1000 4
lxc.idmap: g 100 100 1
lxc.idmap: u 1004 101004 64532
lxc.idmap: g 101 100101 64535
lxc.idmap: g 65534 165534 1

Maybe this is relevant: I recently had to play around with UID and GID mapping quite a lot in order to make the ZFS volumes available inside the CT the way I want them.
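
For reference, custom lxc.idmap entries like the ones above only work if root on the host is also allowed to delegate those IDs. A minimal sketch of the host-side entries that the two "pass-through" mappings (u 1000 1000 4 and g 100 100 1) would need, in addition to the default root:100000:65536 lines that Proxmox ships:
Code:
# /etc/subuid -- allow root to map host UIDs 1000-1003 into the CT
root:1000:4
# /etc/subgid -- allow root to map host GID 100 into the CT
root:100:1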

Also, I see in that config that I do not have "backup=1" set for disk-1 .. disk-3. I left that out on purpose, because I want to back up those disks separately from the CT backup. And, yes, I didn't quite get there yet, so there are no backups of those disks...
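
(For anyone reading along: if a mount point should be included in the CT's vzdump backup after all, the flag can be set on the existing mount point, e.g. for mp0 above, something like:)
Code:
# include mp0 in vzdump backups of CT 100
pct set 100 -mp0 pool1s1:vm-100-disk-1,mp=/srv/smb_temp,size=200G,backup=1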

Last night, when I wanted to restore the CT from its backup, I could not restore it to the same ID (100) while the container still existed, which makes sense. So I removed the CT first and then restored from backup to its original VMID.
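
For clarity, the sequence was roughly this (the archive path is a placeholder, behaviour as documented for storage-backed mount points):
Code:
# removes CT 100 together with all volumes it owns
# (rootfs and the storage-backed mount points vm-100-disk-1..3)
pct destroy 100

# recreates only what is inside the archive; mount points without
# backup=1 come back as new, empty volumes of the configured size
pct restore 100 /path/to/vzdump-lxc-100-<timestamp>.tar.zst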

My understanding was (and kind of still is...) that when I create an LV and mount it into a CT as a non-OS file system, it has a right to exist on its own, rather than being an appendix to the CT that lives and dies with it. Or at least, when I destroy the CT, I would expect to be asked whether I want to delete the disks with it.

But looking at the output of lvdisplay makes me fear just the worst:
Code:
  --- Logical volume ---
  LV Path                /dev/vg1s1/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                vg1s1
  LV UUID                ND69Nn-c5qH-s7Bk-c2U1-AT36-Gawy-ykvf7f
  LV Write Access        read/write
  LV Creation host, time pve1, 2022-10-25 22:18:19 +0200
  LV Pool name           pool1s1
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Mapped size            46.92%
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:6
  
  --- Logical volume ---
  LV Path                /dev/vg1s1/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                vg1s1
  LV UUID                joBTpP-uJEe-3yXD-xTVP-rZ4s-1v2q-NHRIHr
  LV Write Access        read/write
  LV Creation host, time pve1, 2022-10-25 22:18:20 +0200
  LV Pool name           pool1s1
  LV Status              available
  # open                 1
  LV Size                200.00 GiB
  Mapped size            2.09%
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/vg1s1/vm-100-disk-2
  LV Name                vm-100-disk-2
  VG Name                vg1s1
  LV UUID                sznZbK-5GlU-tXdY-gNgW-nFoz-uLpS-EUOoux
  LV Write Access        read/write
  LV Creation host, time pve1, 2022-10-25 22:18:23 +0200
  LV Pool name           pool1s1
  LV Status              available
  # open                 1
  LV Size                600.00 GiB
  Mapped size            1.76%
  Current LE             153600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:16
  
  --- Logical volume ---
  LV Path                /dev/vg1s1/vm-100-disk-3
  LV Name                vm-100-disk-3
  VG Name                vg1s1
  LV UUID                XRFKMJ-6PS3-V20b-avMN-ImtK-cDGf-EbfCNb
  LV Write Access        read/write
  LV Creation host, time pve1, 2022-10-25 22:18:29 +0200
  LV Pool name           pool1s1
  LV Status              available
  # open                 1
  LV Size                400.00 GiB
  Mapped size            1.84%
  Current LE             102400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:17

So "LV Creation host time" was last night for all the disks. Does that really mean that I accidentally deleted the disk-1 .. disk-3 and all its content with it and then re-created it with the restore process?

Or - I would much prefer that - did I merely make the disks' content invisible to the guest/CT while playing around with GID/UID mapping?
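
One way I could check which of the two it is would be to look at a volume directly from the host, bypassing the CT and its ID mapping entirely. A rough sketch (device path taken from the lvdisplay output above, mount point name chosen freely):
Code:
# mount one of the thin LVs read-only on the host and look inside
mkdir -p /mnt/lvcheck
mount -o ro /dev/vg1s1/vm-100-disk-1 /mnt/lvcheck
ls -la /mnt/lvcheck   # only lost+found => the data really is gone
umount /mnt/lvcheck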

Please, tell me it's the latter and how I can fix it! I just told my wife she can totally rely on our new backup system that is so much better than the old Apple TimeCapsule...

Thanks!
 
Thanks @Dunuin, I was afraid someone would say that.

But while we're at it... How would I do it right next time?

Again, in my logic, the disk/volume lives on its own and must not die just because it happened to be connected to a VM/CT that got removed. I thought that was the whole point of VMs/CTs - that they are basically disposable, while your data persists on volumes that are NOT rooted inside that machine. This works fine with the ZFS datasets I mount into the same CT. I just don't want to use ZFS on this SSD (single drive, consumer grade, not much RAM available...) and would rather stick with a simpler solution with less overhead, like an LVM-thin LV.
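
If I understand the docs correctly, what I am after could be done without ZFS by managing the thin LV on the host myself and only bind-mounting it into the CT, so that Proxmox never treats it as owned by the container. A sketch under that assumption, reusing the thin pool vg1s1/pool1s1 from above with freely chosen names:
Code:
# create and format a thin LV that belongs to the host, not to CT 100
lvcreate -V 600G -T vg1s1/pool1s1 -n smb_tm_n
mkfs.ext4 /dev/vg1s1/smb_tm_n

# mount it on the host (plus an /etc/fstab entry to survive reboots)
mkdir -p /mnt/smb_tm_n
mount /dev/vg1s1/smb_tm_n /mnt/smb_tm_n

# bind-mount the host directory into the CT instead of a storage-backed mp
pct set 100 -mp1 /mnt/smb_tm_n,mp=/srv/smb_tm_n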

So am I forced to use ZFS, or even to bind-mount a directory into the VM/CT? That can't be it, right?
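
(One safety net I have since found in the pct options, assuming I read them correctly: the protection flag blocks removal of the container and its disks until it is cleared again, which would at least have stopped my destroy-then-restore mistake:)
Code:
# refuse 'pct destroy' and disk removal for CT 100 while protection is set
pct set 100 --protection 1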
 
