TASK ERROR: rbd error: received interrupt

Hi,
please check the status of your Ceph storage (ceph -s) as well as the system log/journal from around the time the issue occurs. Please also share the output of pveversion -v, the full start task log, the VM configuration (qm config 120) and the storage configuration (cat /etc/pve/storage.cfg).
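For the journal, something along these lines covers the relevant window (the timestamps are only placeholders, adjust them to when the error actually occurred):

journalctl --since "YYYY-MM-DD HH:MM" --until "YYYY-MM-DD HH:MM"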
 
root@pve:~# ceph -s
  cluster:
    id:     6db86540-d2e5-41bd-b59e-7c74eda458be
    health: HEALTH_WARN
            mon pve is low on available space
            1/3 mons down, quorum pve,pve2
            1 osds down
            1 host (1 osds) down
            Reduced data availability: 33 pgs inactive
            Degraded data redundancy: 80592/120888 objects degraded (66.667%), 33 pgs degraded, 33 pgs undersized

  services:
    mon: 3 daemons, quorum pve,pve2 (age 20h), out of quorum: pve3
    mgr: pve(active, since 20h)
    osd: 3 osds: 1 up (since 20h), 2 in (since 2d)

  data:
    pools:   2 pools, 33 pgs
    objects: 40.30k objects, 149 GiB
    usage:   144 GiB used, 135 GiB / 279 GiB avail
    pgs:     100.000% pgs not active
             80592/120888 objects degraded (66.667%)
             33 undersized+degraded+peered
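For what it's worth, the degraded count matches a 3-replica pool with only one OSD serving data: 40,296 objects x 3 = 120,888 object copies in total, and with two of the three replicas unavailable, 2 x 40,296 = 80,592 copies are degraded, which is exactly the 66.667% shown. To see which OSD and which monitor are affected, the usual Ceph commands are, for example:

ceph health detail
ceph osd tree
ceph mon stat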

-------------------------------------------------------------------------

root@pve:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.0-1
proxmox-backup-file-restore: 3.2.0-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.1
pve-cluster: 8.0.6
pve-container: 5.0.10
pve-docs: 8.2.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2


-----------------------------------------------------------------


root@pve:~# qm config 120
boot: order=scsi0;ide2;net0
cores: 7
cpu: x86-64-v2-AES
ide2: local:iso/debian-8.10.0-amd64-DVD-1.iso,media=cdrom,size=3875424K
memory: 4096
meta: creation-qemu=8.1.5,ctime=1734608456
name: drbd2
net0: virtio=BC:24:11:0C:8B:7A,bridge=vmbr3,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: latest1afteruptodate
scsi0: local-lvm:vm-120-disk-0,iothread=1,size=60G
scsi1: pvecool:vm-120-disk-2,iothread=1,size=1536M
scsihw: virtio-scsi-single
smbios1: uuid=ddbfcab2-9dc6-4f3c-82eb-4876fea0195b
sockets: 1
unused0: local-lvm:vm-120-disk-1
unused1: pvecool:vm-120-disk-0
unused2: pvecool:vm-120-disk-1
unused3: local-lvm:vm-120-disk-2
unused4: local-lvm:vm-120-disk-3
vmgenid: 8260959d-a0d9-47dd-8631-607df1469023
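Note that scsi1 (and several of the unused disks) sit on the pvecool RBD storage, so starting the VM has to access that Ceph pool; with krbd enabled the image is mapped through the kernel RBD client, and if that mapping hangs because the cluster is unhealthy, the start task is presumably interrupted eventually. A quick check for existing or stale kernel mappings on the node (plain rbd tooling, just as an example):

rbd showmapped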


-------------------------------------------------------------------------------


root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: pvecool
        content rootdir,images
        krbd 1
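To confirm that the pvecool storage is the part that blocks the VM start, the standard PVE storage commands can be tried from the shell (examples; they may hang for as long as the PGs are inactive):

pvesm status
pvesm list pvecool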
 
As the ceph -s output above shows, only one of the three OSDs is up right now, so not all data is available.
 
