Mounting an existing RBD image

grindustrial

Recently I had to re-install Proxmox on my SSDs, since replication is not supported with LVM, and I had to make a "sort of" backup of some files from a container, around 250 GB. To do that, I mounted a disk on Ceph storage, transferred the files to it, unmounted the disk, re-installed Proxmox on that node, and created the container again using the same CID.
After that, I can't figure out how to re-use the disk I created (vm-100-disk-1). The mount point option automatically sets it up as vm-100-disk-2, with no option to use an existing one. I'm most likely missing some commands, but Google didn't help (or at least I couldn't formulate the right question).
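For context, attaching the temporary Ceph-backed mount point was done with something along these lines (a rough reconstruction, not the exact command; the storage:size form allocates a fresh volume on that storage):
Code:
# add a new 250G mount point on the Ceph storage at /home/temp inside CT 100
pct set 100 -mp0 ceph-storage-1:250,mp=/home/temp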

Thank you in advance, anything helps!
 
To my knowledge, you aren't missing any commands. Your new install of Proxmox isn't aware of vm-100-disk-1, and since it doesn't want to be destructive (a big thank you to the devs for this!!), you will have to show Proxmox the way. Go to /etc/pve/lxc/ (container configs live there; VM configs are under /etc/pve/qemu-server/) and edit the appropriate config file (guessing 100.conf). You'll see a line in there that specifies vm-100-disk-2 as the disk to use. Change that to vm-100-disk-1 (assuming that's what you've labeled it), save it, and then start your container.
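Something like this, as a rough sketch (the storage placeholder stands for whatever your storage ID actually is):
Code:
# container configs live under /etc/pve/lxc/ (VMs are under /etc/pve/qemu-server/)
nano /etc/pve/lxc/100.conf
# on the line referencing the disk, change the volume name, e.g.
#   mp0: <storage>:vm-100-disk-2,...   becomes   mp0: <storage>:vm-100-disk-1,...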
 
Thanks for your reply, WhiteStarEOF. I gave it a try, but the container ends up unable to start, with the typical error:
Code:
Starting PVE LXC Container: 100...
lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 815 No such file or directory - Failed to receive the container state
pve-container@100.service: Control process exited, code=exited status=1
Failed to start PVE LXC Container: 100
pve-container@100.service: Unit entered failed state
pve-container@100.service: Failed with result 'exit-code'
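That generic lxc-start message rarely shows the real cause; running the container in the foreground with debug logging usually does. A sketch (the log path is arbitrary):
Code:
# start CT 100 in the foreground with debug-level logging
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log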
 
Can you post the contents of your 100.conf file? Also, could you do a pwd and an ls -l in the directory where the virtual disks live (assuming these are .qcow2 files you're working with)?
 
This is the config in its "working state".
Ignore the network configuration.

Code:
arch: amd64
cores: 1
hostname: archive
memory: 512
mp0: ceph-storage-1:vm-100-disk-2,mp=/home/temp,size=250G
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=A6:58:31:17:B9:C0,ip=192.168.1.11/16,type=veth
ostype: ubuntu
rootfs: local-zfs:subvol-100-disk-1,size=10G
swap: 512


Code:
pwd
/etc/pve/lxc


Code:
ls -l
total 1
-rw-r--r-- 1 root www-data 296 Aug 27 11:18 100.conf
 
Drat. Ceph and containers. The gaps in my knowledge are showing. I guess the best guidance I can give is that:

* vm-100-disk-1 needs to be in the same location as vm-100-disk-2
* The naming convention needs to match. We can't have Proxmox looking for vm-100-disk-1 if the image is actually named vm-100-disk-1.raw or .qcow2 (a quick way to check this is sketched below)
* Double check permissions

Unfortunately I haven't used containers or Ceph, so I don't know what the configuration file or the data on disk should look like. :(
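For the first two points, assuming the mount lives on Ceph RBD, a quick check might look like this (the pool name "rbd" is a guess; the real one is whatever ceph-storage-1 points at in /etc/pve/storage.cfg):
Code:
# list the images that actually exist in the pool, then inspect the candidate
rbd -p rbd ls | grep vm-100
rbd -p rbd info vm-100-disk-1
As far as I know, RBD images carry no file extension; the .raw format is implicit, so the bare name vm-100-disk-1 is what the config should reference.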
 
The images created are in .raw format. I'll look into those and try my luck. Anyway, thanks a lot for taking the time to help! Maybe someone from the Proxmox team will see this and give some advice.
 

While this is closing the barn door after the horse has bolted: that's what vzdump is for. See https://pve.proxmox.com/wiki/Backup_and_Restore
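For next time, a minimal cycle looks something like this (storage name and archive path are examples, not your actual ones):
Code:
# back up CT 100 in snapshot mode to the storage named "local"
vzdump 100 --storage local --mode snapshot
# restore it later (the archive name here is only an example)
pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2018_08_27-11_00_00.tar.lzo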


Edit the config file manually; just be sure that:
1. the Proxmox storage ID is "ceph-storage-1", and
2. the RBD image you're trying to mount is actually called "vm-100-disk-2". If it isn't, edit the LXC config file (100.conf) to reflect the name of the actual RBD image.
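To verify both points from the CLI, something like this (pvesm is the Proxmox storage manager):
Code:
# confirm the storage ID exists and is active
pvesm status
# list the volumes Proxmox sees on that storage; the old image should show up here
pvesm list ceph-storage-1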
 
Is there a possibility that the disk won't mount because of a version mismatch? The image was created on 5.2-5, I believe, and after re-installing, pveversion -v reports 5.2-7. Kind of a weird question, but maybe it has something to do with it (or maybe the Ceph versions differ).
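An easy way to rule that out is to compare versions on the node; a point-release bump within 5.2 is unlikely to break RBD mapping, but checking is cheap:
Code:
# the PVE package stack and the Ceph client version on this node
pveversion -v
ceph --version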
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!