Import old disks

mcflux101

Member
Jul 3, 2021
Hi

I had issues upgrading my Proxmox from v7 to v8.

I have now backed up all the container disks to external storage and want to import them again. But when I try to copy a disk back, it tells me the disk is full and there is no space left, even though there is nothing on it at all except Proxmox itself. I'm a bit confused why there is no space left. Does Proxmox reserve all the space on the hard disk?

Also, I cannot see any storage resource to which I could assign new containers or VMs.

Please help, I'm really confused and just about to install v7 again to try it with that.

Kind regards
mcflux
 
I have now backed up all the container disks to external storage and want to import them again. But when I try to copy a disk back, it tells me the disk is full and there is no space left, even though there is nothing on it at all except Proxmox itself. I'm a bit confused why there is no space left. Does Proxmox reserve all the space on the hard disk?
You need to be more specific...
How did you back the disks up?
Where are you trying to copy the disks to when it complains about not enough space?
What do your storages look like? (The outputs of `cat /etc/pve/storage.cfg`, `lsblk`, `df -h`, `pvesm status`, `zpool list -v`, `zfs list -o space`, `lvs`, and `vgs`, for example, would help.)
Also I cannot see any resource where I could assign new containers or VM to.
That's usually the "local-lvm" or "local-zfs" storage.
 
You need to be more specific...
How did you back the disks up?
Where are you trying to copy the disks to when it complains about not enough space?
What do your storages look like? (The outputs of `cat /etc/pve/storage.cfg`, `lsblk`, `df -h`, `pvesm status`, `zpool list -v`, `zfs list -o space`, `lvs`, and `vgs`, for example, would help.)

That's usually the "local-lvm" or "local-zfs" storage.
I did copy the configs in /etc/pve and also all the disks in /dev/ that were linked in /dev/pve. I thought I could copy them back onto the new installation and create the symbolic links again.

That's the message I get when copying the files to PVE via SSH, but also when I connect the disk drive directly and use the `cp file directory` command.

Here is my `storage.cfg`:
Code:
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

rbd: default_pool
    content images,rootdir
    krbd 0
    pool default_pool

`lsblk` output:
Bash:
NAME                                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                           8:0    0 931.5G  0 disk
├─sda1                                        8:1    0  1007K  0 part
├─sda2                                        8:2    0     1G  0 part
└─sda3                                        8:3    0 930.5G  0 part
  ├─pve-swap                                252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                          252:2    0   8.1G  0 lvm 
  │ └─pve-data                              252:6    0 794.3G  0 lvm 
  └─pve-data_tdata                          252:5    0 794.3G  0 lvm 
    └─pve-data                              252:6    0 794.3G  0 lvm 
sdb                                           8:16   0 931.5G  0 disk
├─sdb1                                        8:17   0  1007K  0 part
├─sdb2                                        8:18   0   512M  0 part
└─sdb3                                        8:19   0   931G  0 part
  ├─pve--OLD--552BFCB7-data_tmeta           252:3    0   8.1G  0 lvm 
  │ └─pve--OLD--552BFCB7-data-tpool         252:7    0 794.8G  0 lvm 
  │   ├─pve--OLD--552BFCB7-data             252:8    0 794.8G  1 lvm 
  │   ├─pve--OLD--552BFCB7-vm--105--disk--0 252:9    0     8G  0 lvm 
  │   ├─pve--OLD--552BFCB7-vm--104--disk--0 252:10   0     8G  0 lvm 
  │   ├─pve--OLD--552BFCB7-vm--100--disk--0 252:11   0   120G  0 lvm 
  │   ├─pve--OLD--552BFCB7-vm--101--disk--0 252:12   0     8G  0 lvm 
  │   ├─pve--OLD--552BFCB7-vm--102--disk--0 252:13   0    64G  0 lvm 
  │   └─pve--OLD--552BFCB7-vm--102--disk--1 252:14   0    64G  0 lvm 
  └─pve--OLD--552BFCB7-data_tdata           252:4    0 794.8G  0 lvm 
    └─pve--OLD--552BFCB7-data-tpool         252:7    0 794.8G  0 lvm 
      ├─pve--OLD--552BFCB7-data             252:8    0 794.8G  1 lvm 
      ├─pve--OLD--552BFCB7-vm--105--disk--0 252:9    0     8G  0 lvm 
      ├─pve--OLD--552BFCB7-vm--104--disk--0 252:10   0     8G  0 lvm 
      ├─pve--OLD--552BFCB7-vm--100--disk--0 252:11   0   120G  0 lvm 
      ├─pve--OLD--552BFCB7-vm--101--disk--0 252:12   0     8G  0 lvm 
      ├─pve--OLD--552BFCB7-vm--102--disk--0 252:13   0    64G  0 lvm 
      └─pve--OLD--552BFCB7-vm--102--disk--1 252:14   0    64G  0 lvm

I think I have a mirrored disk in the system, but I don't remember 100%. I would assume that's sdb3.

`pvesm status`:
Bash:
got timeout
Name                Type     Status           Total            Used       Available        %
default_pool         rbd   inactive               0               0               0    0.00%
local                dir     active        98497780         2719792        90728440    2.76%
local-lvm        lvmthin     active       832888832               0       832888832    0.00%

`zpool list -v` => no pools available
`zfs list -o space` => no dataset available

`lvs`:
Bash:
 LV                              VG               Attr       LSize    Pool Origin                  Data%  Meta%  Move Log Cpy%Sync Convert
  data                            pve              twi-a-tz--  794.30g                              0.00   0.24                           
  root                            pve              -wi-ao----   96.00g                                                                     
  swap                            pve              -wi-ao----    8.00g                                                                     
  data                            pve-OLD-552BFCB7 twi-aotz-- <794.79g                              12.77  0.98                           
  snap_vm-100-disk-0_Init         pve-OLD-552BFCB7 Vri---tz-k  120.00g data vm-100-disk-0                                                 
  snap_vm-100-disk-0_Ready_210509 pve-OLD-552BFCB7 Vri---tz-k  120.00g data vm-100-disk-0                                                 
  snap_vm-102-disk-0_Init         pve-OLD-552BFCB7 Vri---tz-k   64.00g data                                                               
  snap_vm-102-disk-1_InstalledDev pve-OLD-552BFCB7 Vri---tz-k   64.00g data vm-102-disk-1                                                 
  snap_vm-104-disk-0_Basic_setup  pve-OLD-552BFCB7 Vri---tz-k    8.00g data vm-104-disk-0                                                 
  snap_vm-105-disk-0_Init         pve-OLD-552BFCB7 Vri---tz-k    8.00g data vm-105-disk-0                                                 
  vm-100-disk-0                   pve-OLD-552BFCB7 Vwi-a-tz--  120.00g data                         12.26                                 
  vm-101-disk-0                   pve-OLD-552BFCB7 Vwi-a-tz--    8.00g data                         23.54                                 
  vm-102-disk-0                   pve-OLD-552BFCB7 Vwi-a-tz--   64.00g data snap_vm-102-disk-0_Init 16.96                                 
  vm-102-disk-1                   pve-OLD-552BFCB7 Vwi-a-tz--   64.00g data                         94.92                                 
  vm-104-disk-0                   pve-OLD-552BFCB7 Vwi-a-tz--    8.00g data                         21.94                                 
  vm-105-disk-0                   pve-OLD-552BFCB7 Vwi-a-tz--    8.00g data                         25.94

`vgs`:
Bash:
  VG               #PV #LV #SN Attr   VSize    VFree 
  pve                1   3   0 wz--n- <930.51g  16.00g
  pve-OLD-552BFCB7   1  13   0 wz--n- <931.01g 119.99g


Thank you. I hope the formatting works fine.
 
Your virtual disks are LVM thin volumes. So you would have to create new, same-sized thin volumes in the new thin pool and then clone the contents at the block level from the old thin volumes to the new ones, for example using dd. Maybe you will find a tool that could help you with that. I'm not sure if, for example, Clonezilla or GParted would allow you to clone/copy thin volumes between thin pools.
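As a rough sketch of what that could look like on the CLI (VG/LV names taken from the `lsblk` output earlier in this thread; run as root, and double-check every device path before writing anything, since `dd` to the wrong device destroys data):

```shell
# 1) Create a new thin volume of the same size in the new "data" pool
#    (8G matches vm-101-disk-0 in the old pool, per the lsblk output).
lvcreate -V 8G --thinpool pve/data -n vm-101-disk-0

# 2) Make sure the old thin volume is activated so its /dev node exists.
lvchange -ay pve-OLD-552BFCB7/vm-101-disk-0

# 3) Clone it block for block onto the new thin volume.
dd if=/dev/pve-OLD-552BFCB7/vm-101-disk-0 \
   of=/dev/pve/vm-101-disk-0 bs=4M status=progress conv=fsync
```

After cloning, the guest config still has to reference the volume (e.g. a line like `rootfs: local-lvm:vm-101-disk-0,size=8G` for a container); treat that exact line as an assumption to check against your backed-up configs.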
 
Your virtual disks are LVM thin volumes. So you would have to create new, same-sized thin volumes in the new thin pool and then clone the contents at the block level from the old thin volumes to the new ones, for example using dd. Maybe you will find a tool that could help you with that. I'm not sure if, for example, Clonezilla or GParted would allow you to clone/copy thin volumes between thin pools.
I can't create a new thin pool since I can't select any disk:
[screenshot: no disk selectable in the thin pool creation dialog]
 
Maybe I have to do that and then this will free up space? I don't understand why the volume uses up all the disk space by default.
No, you can't shrink a thin pool.

I can't create a new thin pool since I can't select any disk:
[screenshot: no disk selectable in the thin pool creation dialog]
You will have to do everything via the CLI. You already have a thin pool (the "data" LV), so there is no need to create a new one. The webUI will only allow you to create a new thin pool by adding a new, empty disk.

Your old "data" thin pool was 794.79G; the new one is 794.30G. So it's not that much smaller and shouldn't matter when cloning the thin volumes.
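If the sizes matter, one way (sketched, not tested here) to match them exactly is to query the old LV's size in bytes instead of trusting the rounded figures `lvs` prints by default:

```shell
# Exact size in bytes of the old thin volume (lvs rounds to two
# decimals by default, which can hide a small mismatch).
SIZE_B="$(lvs --noheadings --nosuffix --units b -o lv_size \
          pve-OLD-552BFCB7/vm-100-disk-0 | tr -d '[:space:]')"

# Create the new thin volume with that exact byte count.
lvcreate -V "${SIZE_B}b" --thinpool pve/data -n vm-100-disk-0
```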
 
But my "old" thin pool does not exist. I just copied the disks from /dev/dm-xx to an external disk. And now I can't copy them back because the current pool seems to occupy all the disk space.

Did I do something wrong during the new installation of Proxmox? Was there an option in the advanced options during the installation for configuring volumes?
 
Here, https://pve.proxmox.com/wiki/Installation, there is something about the advanced options where you can specify a minfree space.
Should I maybe reinstall Proxmox, configure some free space there, and then copy the disks over and link them inside /dev/pve? Wouldn't that mount the disks again in the pool?

Is it even possible to add the disks again? :( I hoped that was the only thing, apart from the config, I had to back up. I had no access to the UI anymore.
Or maybe I could try using "Rescue mode" from the installer to boot the old Proxmox?
 
But my "old" thin pool does not exist. I just copied the disks from /dev/dm-xx to an external disk. And now I can't copy them back because the current pool seems to occupy all the disk space.
How did you copy them? Those aren't files, those are block devices. You can't simply copy them like you would copy a file.
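For reference, the usual way to move a block device around as a file is `dd` into an image file, not a plain file copy; a minimal sketch, with `/mnt/backup` standing in for wherever the external disk is mounted:

```shell
# Image the block device into an ordinary file on the external disk:
dd if=/dev/pve-OLD-552BFCB7/vm-101-disk-0 \
   of=/mnt/backup/vm-101-disk-0.raw bs=4M status=progress

# ...and later write that image back onto a same-sized new volume:
dd if=/mnt/backup/vm-101-disk-0.raw \
   of=/dev/pve/vm-101-disk-0 bs=4M conv=fsync status=progress
```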
 
I copied them over like files. I connected via SSH and downloaded them onto a hard disk. So you can copy them like a file, but it seems it's more complicated to get them back in then...

I wonder now if I can use the "pve-OLD" volume group somehow to restore the old state of my Proxmox?
 
