Unable to clone CT with mp0 on a different storage

jparis

Hi,

I'm trying to clone a standard Debian 10 container:
[Screenshot: Captura de pantalla 2022-08-15 a las 13.14.25.png]
But I get this when issuing the clone; rootfs is OK:

Code:
create full clone of mountpoint rootfs (local-lvm:vm-108-disk-0)
  Logical volume "vm-104-disk-0" created.
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 9b602fbe-a431-4730-8f6c-ea0cc22badbb
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

Number of files: 86,673 (reg: 66,096, dir: 16,489, link: 4,057, special: 31)
Number of created files: 86,671 (reg: 66,096, dir: 16,487, link: 4,057, special: 31)
Number of deleted files: 0
Number of regular files transferred: 66,077
Total file size: 3,943,235,765 bytes
Total transferred file size: 3,807,943,501 bytes
Literal data: 3,807,943,501 bytes
Matched data: 0 bytes
File list size: 2,686,827
File list generation time: 0.013 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 3,814,136,565
Total bytes received: 1,360,335

sent 3,814,136,565 bytes  received 1,360,335 bytes  133,877,084.21 bytes/sec
total size is 3,943,235,765  speedup is 1.03

But mp0 fails:

Code:
create full clone of mountpoint mp0 (Raid5:vm-108-disk-0)
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-104-disk-1" created.
  WARNING: Sum of all thin volume sizes (<1.29 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (446.12 GiB).
Creating filesystem with 268435456 4k blocks and 67108864 inodes
Filesystem UUID: 0169b069-26f3-4ac5-9798-63d27eb72059
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848
rsync: [receiver] mkstemp "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/filedir/16/55/.1655b6bd88710ae2884729b3752b0fb4c2dd4858.7GClCt" failed: Input/output error (5)
rsync: [receiver] chown "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/filedir/54/3b/.543b8c88e376ff59b967138289d092c1e7178b69.spNe8v" failed: Read-only file system (30)
[... many similar errors ...]
rsync: [generator] recv_generator: mkdir "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/trashdir/fe" failed: Read-only file system (30)
*** Skipping any contents from this failed directory ***
[... many similar errors ...]
Number of files: 277,606 (reg: 155,813, dir: 121,793)
Number of created files: 242,257 (reg: 121,270, dir: 120,987)
Number of deleted files: 0
Number of regular files transferred: 121,270
Total file size: 458,050,011,039 bytes
Total transferred file size: 326,368,959,073 bytes
Literal data: 326,368,959,073 bytes
Matched data: 0 bytes
File list size: 11,925,531
File list generation time: 0.143 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 326,464,648,306
Total bytes received: 2,989,621

sent 326,464,648,306 bytes  received 2,989,621 bytes  134,653,593.70 bytes/sec
total size is 458,050,011,039  speedup is 1.40
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
  Logical volume "vm-104-disk-1" successfully removed
  Logical volume "vm-104-disk-0" successfully removed
TASK ERROR: clone failed: command 'rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system '--bwlimit=0' /var/lib/lxc/104/.copy-volume-2/ /var/lib/lxc/104/.copy-volume-1' failed: exit code 23

I can back up the full CT (both rootfs and mp0) to another storage (NFS-mounted) with no problems, and I can snapshot it, but cloning has this issue.

My guess is that the GUI clone command tries to clone both mount points to the same thin storage, but each needs to be replicated to its own storage, since mp0 is a big one and doesn't fit on the rootfs storage.
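If that's what is happening, maybe it could be worked around from the CLI by full-cloning everything onto Raid5 (which is big enough for mp0) and then moving the small rootfs back to the SSD thin pool. An untested sketch (the hostname is just a placeholder):

Code:
# Untested sketch: full-clone both volumes onto Raid5, then move rootfs back
pct clone 108 104 --full 1 --storage Raid5 --hostname ct104-clone
pct move-volume 104 rootfs local-lvm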

Any help would be appreciated.
 
Hi,

the error says "Read-only file system". How is your "Raid5" storage set up? Can you post your /etc/pve/storage.cfg?
 
Hi, of course:

Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvmthin: Raid5
        thinpool Raid5
        vgname Raid5
        content images,rootdir
        nodes pve

nfs: NAS-BackupProxmox
        export /volume1/BackupProxmox
        path /mnt/pve/NAS-BackupProxmox
        server 172.16.1.241
        content iso,rootdir,backup,images,vztmpl,snippets
        prune-backups keep-all=1
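For reference, pvesm status gives a quick side-by-side view of capacity and usage for all of these storages:

Code:
# Status, total, used and available space of every configured storage
pvesm status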
 
The Raid5 storage is four mechanical 4 TB HDDs installed in the server behind a hardware RAID card. They appear to the system as a single disk drive, just like local and local-lvm, which sit on a two-SSD mirror.
 
Ah sorry, I'm blind apparently :D. This happens while the source and destination filesystems are both mounted to copy the files over. According to the error, the destination filesystem is mounted read-only for some reason.

Do you have any other containers on Raid5 that you can try to clone? It would be interesting if this happens with other containers as well.
 
Hi, I'm sure the filesystem at mp0 is writable. It contains the data directory for a Moodle system that is in production, so it must be writable.
 
The problem is that the destination is apparently read-only for some reason. That is why I suggested testing what happens with another container on the same storage :).
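If you don't have a second container on that storage, you could also test whether Raid5 itself accepts writes. A rough sketch (VMID 999 and the /mnt mount point are just placeholders):

Code:
# Allocate a small test volume on Raid5, write a file to it, then clean up
pvesm alloc Raid5 999 vm-999-disk-0 1G
mkfs.ext4 /dev/Raid5/vm-999-disk-0
mount /dev/Raid5/vm-999-disk-0 /mnt
touch /mnt/write-test && echo "Raid5 accepts writes"
umount /mnt
pvesm free Raid5:vm-999-disk-0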
 
Do you have enough free space in your thin pool? Is there anything in the system journal at the time of the clone?
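For example (adjust the timestamp to when the clone ran):

Code:
# Thin pool fill level: Data% near 100 would explain I/O errors and the
# destination filesystem being remounted read-only during the copy
lvs -o lv_name,vg_name,lv_size,data_percent,metadata_percent pve Raid5

# System journal around the time of the failed clone
journalctl --since "2022-08-15 13:00" | grep -Ei 'thin|read-only|i/o error'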
 