Hi,
I'm trying to clone a standard Debian 10 container, but I get errors when issuing the clone. The rootfs part is OK:
Code:
create full clone of mountpoint rootfs (local-lvm:vm-108-disk-0)
Logical volume "vm-104-disk-0" created.
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 9b602fbe-a431-4730-8f6c-ea0cc22badbb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
Number of files: 86,673 (reg: 66,096, dir: 16,489, link: 4,057, special: 31)
Number of created files: 86,671 (reg: 66,096, dir: 16,487, link: 4,057, special: 31)
Number of deleted files: 0
Number of regular files transferred: 66,077
Total file size: 3,943,235,765 bytes
Total transferred file size: 3,807,943,501 bytes
Literal data: 3,807,943,501 bytes
Matched data: 0 bytes
File list size: 2,686,827
File list generation time: 0.013 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 3,814,136,565
Total bytes received: 1,360,335
sent 3,814,136,565 bytes received 1,360,335 bytes 133,877,084.21 bytes/sec
total size is 3,943,235,765 speedup is 1.03
But mp0 fails:
Code:
create full clone of mountpoint mp0 (Raid5:vm-108-disk-0)
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-104-disk-1" created.
WARNING: Sum of all thin volume sizes (<1.29 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (446.12 GiB).
Creating filesystem with 268435456 4k blocks and 67108864 inodes
Filesystem UUID: 0169b069-26f3-4ac5-9798-63d27eb72059
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
rsync: [receiver] mkstemp "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/filedir/16/55/.1655b6bd88710ae2884729b3752b0fb4c2dd4858.7GClCt" failed: Input/output error (5)
rsync: [receiver] chown "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/filedir/54/3b/.543b8c88e376ff59b967138289d092c1e7178b69.spNe8v" failed: Read-only file system (30)
[... lots of similar errors ...]
rsync: [generator] recv_generator: mkdir "/var/lib/lxc/104/.copy-volume-1/sync/moodledata/trashdir/fe" failed: Read-only file system (30)
*** Skipping any contents from this failed directory ***
[... lots of similar errors ...]
Number of files: 277,606 (reg: 155,813, dir: 121,793)
Number of created files: 242,257 (reg: 121,270, dir: 120,987)
Number of deleted files: 0
Number of regular files transferred: 121,270
Total file size: 458,050,011,039 bytes
Total transferred file size: 326,368,959,073 bytes
Literal data: 326,368,959,073 bytes
Matched data: 0 bytes
File list size: 11,925,531
File list generation time: 0.143 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 326,464,648,306
Total bytes received: 2,989,621
sent 326,464,648,306 bytes received 2,989,621 bytes 134,653,593.70 bytes/sec
total size is 458,050,011,039 speedup is 1.40
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
Logical volume "vm-104-disk-1" successfully removed
Logical volume "vm-104-disk-0" successfully removed
TASK ERROR: clone failed: command 'rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system '--bwlimit=0' /var/lib/lxc/104/.copy-volume-2/ /var/lib/lxc/104/.copy-volume-1' failed: exit code 23
I can fully back up the whole CT (both rootfs and mp0) to another storage (NFS-mounted) with no problems, and I can snapshot it, but it has this issue with clone.
My guess is that the GUI clone command tries to put both mount points on the same thin storage, while each needs to be replicated to its own storage, since mp0 is big and doesn't fit in the rootfs storage.
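This guess is consistent with the warning in the log (sum of thin volume sizes <1.29 TiB exceeds the 446.12 GiB pve/data pool): if the mp0 copy fills the thin pool, the filesystem goes read-only and rsync fails exactly as shown. A possible workaround, sketched below, is to check the pool usage and then do the full clone from the CLI, pointing all volumes at the bigger storage with `--storage`. The CT IDs (108 → 104), the pool name `pve/data`, and the storage name `Raid5` are taken from the logs above; adjust them to your setup. Note that `pct clone` accepts only a single target storage, so rootfs would also land on Raid5.

```shell
#!/usr/bin/env bash
# Sketch only: assumes a Proxmox VE host with CT 108, thin pool pve/data,
# and a storage named "Raid5". Guarded so it is a no-op elsewhere.
if command -v pct >/dev/null 2>&1; then
    # Show how full the local-lvm thin pool is (Data% column).
    lvs -o lv_name,lv_size,data_percent pve/data

    # Full clone of CT 108 to new CT 104, sending ALL volumes
    # (rootfs and mp0) to the larger Raid5 storage instead of
    # letting the GUI default them to the small local-lvm pool.
    pct clone 108 104 --full --storage Raid5
else
    echo "pct not found; run this on the Proxmox host"
fi
```

Alternatively, you could clone with rootfs only and then `pct move-volume` (or restore a backup with a different target storage per mount point), but the single-storage CLI clone above is the most direct test of the hypothesis.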
Any help will be appreciated