Error moving LXC root disk from local to Ceph

I'm trying to move the LXC root disk from local storage to Ceph (there is over 50% free space on Ceph).
This is the log of the error:

Code:
/dev/rbd3
Creating filesystem with 18350080 4k blocks and 4587520 inodes
Filesystem UUID: e55036f9-7f8a-4a49-af36-7929f96043cd
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424
rsync: write failed on "/var/lib/lxc/208/.copy-volume-1/home/local/2020-09-10/002195.ldb": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]
Removing image: 1% complete...
Removing image: 2% complete...
...

Removing image: 99% complete...
Removing image: 100% complete...done.
TASK ERROR: command 'rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system '--bwlimit=0' /var/lib/lxc/208/.copy-volume-2/ /var/lib/lxc/208/.copy-volume-1' failed: exit code 1

Update: I got the same error when moving this LXC from local to another network storage.
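For reference, the move is roughly equivalent to the following CLI call (a sketch only; the target storage name `ceph-rbd` is a placeholder, and older PVE releases spell the sub-command `pct move_volume`):

```
# Move the container's root disk to the Ceph RBD storage; this is the step that fails
pct move-volume 208 rootfs ceph-rbd
```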
 
rsync: write failed on "/var/lib/lxc/208/.copy-volume-1/home/local/2020-09-10/002195.ldb": No space left on device (28)

Can you confirm that you have enough space on your Ceph storage?

rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]
rsync exit code 11 means an error in file I/O; in this case it is the "No space left on device" write failure above, so the target ran out of space.

Could you post the config of the LXC container (`pct config 208`)?
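A few standard commands to gather that information (a sketch; nothing here is specific beyond the container ID 208 from the log):

```
ceph df        # free space per Ceph pool as Ceph sees it
pvesm status   # available space of each configured PVE storage
pct config 208 # the container configuration requested above
```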
 

Code:
arch: amd64
cores: 96
hostname: svr-ub-108
memory: 401408
mp1: <hidden>
mp2: <hidden>
mp3: <hidden>
mp4: <hidden>
mp5: <hidden>
mp6: <hidden>
mp7: <hidden>
mp8: <hidden>
net0: name=eth0,bridge=vmbr0,gw=<hidden>,hwaddr=<hidden>,ip=<hidden>,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=<hidden>,ip=<hidden>,type=veth
onboot: 1
ostype: ubuntu
rootfs: pve-blade-108-internal-data:subvol-208-disk-0,size=70G
startup: order=4
swap: 0
 
```
rootfs: pve-blade-108-internal-data:subvol-208-disk-0,size=70G
```
You can check how the volume `pve-blade-108-internal-data` is configured in `/etc/pve/storage.cfg`
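For example, a quick way to pull out that storage definition and its current usage (just a sketch; the storage name is taken from the `rootfs` line above):

```
grep -A 5 'pve-blade-108-internal-data' /etc/pve/storage.cfg
pvesm status --storage pve-blade-108-internal-data
```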
 
Code:
zfspool: pve-blade-108-internal-data
    pool pve-blade-108-internal-data
    content rootdir,images
    nodes pve-blade-108
 
This is likely the same problem we have experienced.

When moving to Ceph, the container root disk suddenly uses ext4, while the original ZFS subvolume has compression enabled. Proxmox creates the new volume with the same outer (!) size, so the data suddenly won't fit. Say the original volume has a 100G outer size: thanks to compression it might hold 120G of data and still show 20G free. When moving to Ceph, Proxmox creates a 100G ext4 volume without compression, and it cannot fit the 120G of data in.

You can use `zfs get used pve-blade-108-internal-data` (also consider `zfs get all pve-blade-108-internal-data` for more information) to see how much space is actually used inside. Then first resize the ZFS-backed root disk so the data would fit without compression (plus some headroom, otherwise you end up with a completely full volume), and then attempt the move to Ceph again. In the example above, you would resize the original volume to, say, 150G, so that Proxmox creates a 150G ext4 volume on Ceph, enough to hold the 120G of data (a rough command sketch follows below).

Hope this makes sense.
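A minimal sketch of the inspection step, assuming the root disk maps to the dataset `pve-blade-108-internal-data/subvol-208-disk-0` (inferred from the `rootfs` line in the config above):

```
# logicalused is roughly the space the data will need on an uncompressed ext4 volume;
# compressratio shows how much ZFS compression is currently saving.
zfs get used,logicalused,compressratio pve-blade-108-internal-data/subvol-208-disk-0
```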
 
I think you are correct. Here is my actual usage:

Code:
zfs get used pve-blade-108-internal-data
NAME                         PROPERTY  VALUE  SOURCE
pve-blade-108-internal-data  used      78.0G  -

78G used while only 70G is allocated. I'll try to extend it and move it again once the server becomes available.
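A rough sketch of that plan (the 90G target is only a placeholder comfortably above the reported 78G; adjust as needed):

```
# Grow the container's root disk from 70G to 90G so the data fits without compression,
# then retry the move to Ceph via the GUI or pct move-volume as before.
pct resize 208 rootfs 90G
```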
 
