io-error full disk?

steve319

New Member
Nov 21, 2025
Hello everyone,

I took what I'm finding to be a deep dive into Proxmox a few weeks ago. I have two VMs set up, one for Home Assistant (running fine) and one for Umbrel.

I installed Proxmox on a 500GB USB-connected SSD, and I have a new 2TB M.2 drive plugged into the motherboard.

Home Assistant is installed in a VM on the 500GB disk with 100GB assigned, and I assigned the whole 2TB M.2 to the Umbrel install to run a BTC node. All was working fine until a few days ago, when I started getting a yellow triangle on the Umbrel VM with an io-error. I have been reading this and other forums trying to work it out. I'll be honest, I am not sure where to look for the logs, but after all my reading it might be because the 2TB M.2 is 100% full. I cannot reboot the VM individually, and I have rebooted Proxmox, but I still see the error.

Umbrel was using about 900GB of the 2TB available. I want it to have as much of the disk as possible, but maybe I haven't left enough space for Proxmox to do anything it needs on the drive, hence the error?

From my reading I have changed the AIO setting on the hard disk to 'native', but that hasn't helped.

I don't mind removing the Umbrel VM and starting again, but I want to make sure it doesn't happen again.

Any help appreciated,
Cheers
 

Attachments

  • prox1.jpg
  • prox2.jpg
  • prox3.jpg
I want to make sure it doesn't happen again
Don't allocate too much storage to a virtual disk (mind GB vs. GiB: a "2TB" drive is only about 1.82 TiB). Don't use directory storage; use QCOW2 if you must. Use discard. Set up warnings if storage gets too low.

I don't usually work with .raw disk files on a Directory storage, so I don't have many good ideas on how to fix this.
I suppose you could change the reserved blocks with something like tune2fs -m 1 /dev/yourpartition and then try to fstrim as explained above.
Check lsblk -o+FSTYPE,LABEL,MODEL to find the right partition. I'd very much recommend switching to a different type of storage, either ZFS (my favorite) or LVM-Thin. Changing the reserved blocks is just a temporary (and one-time) fix. Do not keep the VM running if fstrim doesn't work.
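As a rough sketch of those steps (the device below is a placeholder, check lsblk first, and the fstrim part assumes discard is enabled on the virtual disk as covered in the links above):
Bash:
# On the PVE host: find the partition holding the directory storage with the .raw file
lsblk -o+FSTYPE,LABEL,MODEL

# On the PVE host: lower the ext4 reserved-block percentage from the default 5% to 1%
# to free a little space (placeholder device - adjust to your partition)
tune2fs -m 1 /dev/sdX1

# Inside the guest: with discard enabled on the virtual disk, trimming can punch
# holes in the .raw file and give unused space back to the host filesystem
fstrim -av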
 
Thanks for this, I will take a look now. I set up 2TB because apparently that will be good for a few years of running the Bitcoin node, as the current node size is about 900GB, so allocating the full 2TB of the M.2 seemed like a good idea.

Thanks for the reply, and I will see what I can figure with the info you have provided.

Cheers
 
Thanks for the reply earlier. I have taken a look at the links you sent, but to be honest I don't really understand them.

How do I go about setting up Umbrel on the 2TB M.2, using the other file systems you suggest, for its sole use, without it running out of space and causing the error? Umbrel was using about 900GB of the 2TB, although it was showing as 100% usage.
 
There are lots of ways and variables, so I can only give you some hints here; you'll have to fill in and research the blanks. I have no idea what you tried or which commands and steps you followed. What is unclear about what I linked?
You'd have to wipe the existing storage and then go to node > Disks > ... to format it with ZFS or LVM-Thin, but first you'd need a backup of what's on it.
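If you'd rather use the shell, this is roughly what the GUI does under the hood. It's only a sketch: the device and VG/pool names are examples, and the first command destroys everything on the disk, so back up first.
Bash:
# DESTROYS all data on the disk - make sure backups exist
wipefs -a /dev/nvme0n1

# Create an LVM physical volume and volume group on the whole disk
pvcreate /dev/nvme0n1
vgcreate umbrel2tb /dev/nvme0n1

# Create a thin pool using most of the VG (leave a little headroom)
lvcreate -l 95%FREE --thinpool umbrel2tb umbrel2tb

# Register it as an LVM-Thin storage in Proxmox
pvesm add lvmthin umbrel2tb --vgname umbrel2tb --thinpool umbrel2tb --content images,rootdir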
 
Thanks a lot. I managed to unmount and wipe the 2TB M.2 drive. I then added it again using the LVM-Thin disk option. I have only assigned 1536GiB this time rather than the full 2TB. I will follow your link above about setting up a warning for low disk space. Hopefully this time it will be OK. Can I send any screenshots so you clever people can verify?
 
Sure. Share all the pictures and configs you want and I'll let you know if I have some suggestions.
 
What would you recommend I share? I'm guessing my previous issue was because the drive was 100% assigned, even though it wasn't full?
 
Here are a few things which might be of interest:
Bash:
# VM configurations
qm config 100
qm config 101
# Proxmox storage definitions
cat /etc/pve/storage.cfg
# Block devices with filesystem, label and model
lsblk -o+FSTYPE,LABEL,MODEL
# LVM physical volumes, volume groups and logical volumes
pvs
vgs
lvs
You can over-allocate a thin-provisioned storage if you're careful. Doing that with .raw files, though, isn't necessarily a good idea.
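If you do over-allocate, the warning mentioned earlier really is a must. A rough sketch of a cron-able check (the VG/pool name and threshold are just examples, and it assumes the host can send mail):
Bash:
#!/bin/bash
# Warn when an LVM thin pool passes a usage threshold (example values)
THRESHOLD=80
POOL="umbrel2tb/umbrel2tb"

# data_percent is the thin pool's used-data percentage as reported by lvs
USAGE=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ' | cut -d. -f1)

if [ "${USAGE:-0}" -ge "$THRESHOLD" ]; then
    echo "Thin pool $POOL is at ${USAGE}% usage" | mail -s "Proxmox storage warning" root
fi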
 
Yep. One command after another, either via an SSH client (recommended) or in the Shell in the GUI.
 
Code:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 16384
meta: creation-qemu=10.1.2,ctime=1763742990
name: UmbrelOS
net0: virtio=BC:24:11:DB:97:74,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: umbrel2tb:vm-100-disk-1,iothread=1,size=1556G
scsihw: virtio-scsi-single
smbios1: uuid=7e8d6d04-fc35-406c-aae1-7706db8ac1d9
sockets: 1
vmgenid: ec4a8dbc-0eda-4d4c-942a-bf01f06d7a31

bios: ovmf
boot: order=scsi0
cores: 2
cpu: x86-64-v2-AES
efidisk0: local-lvm:vm-101-disk-0,efitype=4m,size=4M
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=10.1.2,ctime=1762504454
name: HomeAssistant
net0: virtio=BC:24:11:7C:D1:A4,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-101-disk-1,discard=on,iothread=1,size=96G
scsihw: virtio-scsi-single
smbios1: uuid=8bd0b55d-6930-4dc2-b5d0-7ebc5c120ac9
sockets: 1
usb0: host=10c4:ea70
vmgenid: 64f0dcaf-15fa-4281-869b-384a583b6826

dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

nfs: Synology-NAS
        export /volume1/Proxmox
        path /mnt/pve/Synology-NAS
        server 192.168.1.210
        content vztmpl,images,backup,snippets,iso,rootdir,import
        prune-backups keep-all=1

lvmthin: umbrel2tb
        thinpool umbrel2tb
        vgname umbrel2tb
        content images,rootdir
        nodes proxmox

NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE      LABEL MODEL
sda                                8:0    0 931.5G  0 disk                               Samsung SSD 860 EVO 1TB
├─sda1                             8:1    0  1007K  0 part                               
├─sda2                             8:2    0     1G  0 part             vfat             
└─sda3                             8:3    0   930G  0 part             LVM2_member       
  ├─pve-swap                     252:0    0     8G  0 lvm  [SWAP]      swap             
  ├─pve-root                     252:1    0    96G  0 lvm  /           ext4             
  ├─pve-data_tmeta               252:2    0   8.1G  0 lvm                               
  │ └─pve-data-tpool             252:4    0 793.8G  0 lvm                               
  │   ├─pve-data                 252:5    0 793.8G  1 lvm                               
  │   ├─pve-vm--101--disk--1     252:6    0    96G  0 lvm                               
  │   └─pve-vm--101--disk--0     252:7    0     4M  0 lvm                               
  └─pve-data_tdata               252:3    0 793.8G  0 lvm                               
    └─pve-data-tpool             252:4    0 793.8G  0 lvm                               
      ├─pve-data                 252:5    0 793.8G  1 lvm                               
      ├─pve-vm--101--disk--1     252:6    0    96G  0 lvm                               
      └─pve-vm--101--disk--0     252:7    0     4M  0 lvm                               
nvme0n1                          259:0    0   1.8T  0 disk             LVM2_member       CT2000P310SSD8
├─umbrel2tb-umbrel2tb_tmeta      252:8    0  15.9G  0 lvm                               
│ └─umbrel2tb-umbrel2tb-tpool    252:10   0   1.8T  0 lvm                               
│   ├─umbrel2tb-umbrel2tb        252:11   0   1.8T  1 lvm                               
│   └─umbrel2tb-vm--100--disk--1 252:13   0   1.5T  0 lvm                               
└─umbrel2tb-umbrel2tb_tdata      252:9    0   1.8T  0 lvm                               
  └─umbrel2tb-umbrel2tb-tpool    252:10   0   1.8T  0 lvm                               
    ├─umbrel2tb-umbrel2tb        252:11   0   1.8T  1 lvm                               
    └─umbrel2tb-vm--100--disk--1 252:13   0   1.5T  0 lvm                               

  PV           VG        Fmt  Attr PSize    PFree 
  /dev/nvme0n1 umbrel2tb lvm2 a--    <1.82t 376.00m
  /dev/sda3    pve       lvm2 a--  <930.00g  16.00g

  VG        #PV #LV #SN Attr   VSize    VFree 
  pve         1   5   0 wz--n- <930.00g  16.00g
  umbrel2tb   1   2   0 wz--n-   <1.82t 376.00m

  LV            VG        Attr       LSize    Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve       twi-aotz-- <793.80g                  2.44   0.32                           
  root          pve       -wi-ao----   96.00g                                                         
  swap          pve       -wi-ao----    8.00g                                                         
  vm-101-disk-0 pve       Vwi-aotz--    4.00m data             14.06                                 
  vm-101-disk-1 pve       Vwi-aotz--   96.00g data             20.19                                 
  umbrel2tb     umbrel2tb twi-aotz--   <1.79t                  2.00   0.22                           
  vm-100-disk-1 umbrel2tb Vwi-aotz--   <1.52t umbrel2tb        2.35
 
  • The virtual disk of VM 100 doesn't have discard enabled as suggested earlier. It won't do much, but it feels right to also add the SSD emulation flag; see the sketch below this list.
  • 8G of RAM is a bit much for HAOS, but if you can spare it, it's fine.
The rest seems fine to me.
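For the discard/SSD emulation flags on VM 100, something like this should do it; it's a sketch based on the scsi0 line above (you can also just tick the two boxes under VM 100 > Hardware > Hard Disk > Edit). The VM needs to be stopped and started again afterwards, and an fstrim inside the guest is what actually returns freed space to the thin pool.
Bash:
# Re-attach VM 100's existing disk with discard and SSD emulation enabled
# (same volume and options as in the config above, plus the two new flags)
qm set 100 --scsi0 umbrel2tb:vm-100-disk-1,iothread=1,size=1556G,discard=on,ssd=1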
 