Hi,
since a few days I have been getting errors indicating that my hard disk is dying:
Code:
Dec 18 13:07:59 pve smartd[1925]: Device: /dev/sdg [SAT], 1 Offline uncorrectable sectors

The partition /dev/sdg1 is the disk that backs the ZFSHD pool.
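As a quick sanity check, the sector count can be pulled straight out of that smartd line (the log line below is hard-coded from the message quoted above; on the live system you would grep the journal or run smartctl against /dev/sdg instead):

```shell
# Extract the uncorrectable-sector count from the smartd log line quoted above.
line='Dec 18 13:07:59 pve smartd[1925]: Device: /dev/sdg [SAT], 1 Offline uncorrectable sectors'
echo "$line" | grep -o '[0-9]* Offline uncorrectable sectors'
# → 1 Offline uncorrectable sectors
```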
Code:
/etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup
lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir
zfspool: ZFSHD
pool ZFSHD
content rootdir,images
mountpoint /ZFSHD
nodes pve
pbs: backupPBS
datastore backupPVE
server 127.0.0.1
content backup
fingerprint b5:49:01:f5:e6:e5:ad:8d:21:38:40:fb:77:68:96:46:c9:54:93:a6:7e:0d:46:3b:c7:64:97:33:5c:a6:a6:f7
nodes pve
prune-backups keep-all=1
username root@pam
To move the VMs/LXCs off the dying disk, I tried to clone a stopped LXC from ZFSHD to local-lvm.
Code:
Storage 'ZFSHD' on node 'pve' (Proxmox Virtual Environment 7.1-8)
Enabled: Yes
Active:  Yes
Content: Disk image, Container
Type:    ZFS
Usage:   10.53% (76.23 GB of 724.10 GB)

Task log:
create full clone of mountpoint rootfs (ZFSHD:subvol-118-disk-0)
Logical volume "vm-106-disk-0" created.
Creating filesystem with 393216 4k blocks and 98304 inodes
Filesystem UUID: 58dfb352-5e56-47ac-bc06-e5a0bbc14668
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
rsync: [receiver] write failed on "/var/lib/lxc/106/.copy-volume-1/var/log/journal/db91a8d9813048b3b0d9627361bf92a2/system@ebe07a11427749cdae31bc510bfe0c40-000000000004d62f-0005d07da3d06a3f.journal": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(378) [receiver=3.2.3]
rsync: [sender] write error: Broken pipe (32)
Logical volume "vm-106-disk-0" successfully removed
TASK ERROR: clone failed: command 'rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system '--bwlimit=0' /var/lib/lxc/106/.copy-volume-2/ /var/lib/lxc/106/.copy-volume-1' failed: exit code 11
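For reference, the "Creating filesystem with 393216 4k blocks" line in the task log pins down how big the freshly created target volume actually is; a back-of-the-envelope check (plain shell arithmetic, numbers taken from the log above):

```shell
# Size of the ext4 filesystem the clone task created on local-lvm,
# per "Creating filesystem with 393216 4k blocks" in the log above.
blocks=393216
block_size=4096
bytes=$((blocks * block_size))
echo "target filesystem: $((bytes / 1024 / 1024)) MiB"
# → target filesystem: 1536 MiB
```

So the new vm-106-disk-0 volume is only about 1.5 GiB, which would explain rsync running out of space regardless of how much room the thin pool itself has.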
The cloning fails with a "No space left on device (28)" error.
Code:
df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 8154872 0 8154872 0% /dev
tmpfs 1637532 1524 1636008 1% /run
/dev/mapper/pve-root 98559220 10244252 83265420 11% /
Code:
pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- <931.01g 15.99g
Code:
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- <794.79g 3.34 0.39
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-a-tz-- 4.00g data 58.56
vm-101-disk-0 pve Vwi-aotz-- 6.00g data 99.76
vm-114-disk-0 pve Vwi-aotz-- 2.50g data 99.51
vm-115-disk-0 pve Vwi-aotz-- 3.00g data 99.81
vm-116-disk-0 pve Vwi-aotz-- 2.50g data 99.82
vm-117-disk-0 pve Vwi-aotz-- 3.00g data 95.55
vm-119-disk-0 pve Vwi-aotz-- 3.50g data 99.79
vm-120-disk-0 pve Vwi-aotz-- <3.91g data 99.69
My understanding is that the thin pool on the pve volume group has enough free space to move the ZFSHD VMs/LXCs to it.
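The lvs output above seems to bear that out; a rough estimate of the free space in the thin pool, using the size (<794.79g) and Data% (3.34) figures shown in the listing (values hard-coded, integer math only):

```shell
# Approximate free space in the "data" thin pool, from the lvs output above.
size_mib=$((794 * 1024))   # ~794 GiB expressed in MiB
used_pct_x100=334          # 3.34% scaled by 100 to stay in integer math
free_mib=$(( size_mib * (10000 - used_pct_x100) / 10000 ))
echo "approx free in thin pool: $((free_mib / 1024)) GiB"
# → approx free in thin pool: 767 GiB
```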
Could you advise why I get this error message?
What is the best way to move the VMs/LXCs from a dying disk to my local-lvm?
Thanks