VM won't start due to io-error - how to increase disk space?

Hi, total beginner here, so please answer simply if possible :)

I have a 64GB SSD drive, and originally dedicated 32GB to a VM (running Home Assistant OS) and 10GB to an LXC container. After updating Home Assistant, the VM wouldn't start, complaining of an io-error:

[screenshot: the io-error message]

It seems like the "LVM-Thin" storage is full. I tried to increase the VM's disk from 32GB.
I did this by selecting the VM --> Hardware --> Hard Disk --> Disk Action --> Resize,
and it now shows as 43GB:

[screenshot: the resized VM disk]

However, the LVM-Thin still shows as full. How can I fix this to get the VM up and running, and how can I figure out what is taking up the space? I have checked that there are no snapshots listed under the VM.

Thanks
 
It appears your LVM-Thin "data" thin pool is too small for your vm-100 disks.
You are making it even worse by trying to increase the vm-100 disks!
I don't know how you got to such a messy setup, but in principle you should have made sure that local-lvm (the LVM-Thin "data" thin pool) is large enough to contain your vm-100 disks.
If you have proper (restorable) vzdump backups (external) of your VM and CT, I would start again with a fresh PVE install, making sure to allocate enough space to local-lvm to contain the above.
Your 64GB SSD seems tight for your setup.
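To see how full that thin pool really is, something like this from the PVE node's shell should show it (assuming the default pve volume group and its "data" thin pool; it only reads and changes nothing):

Code:
# List the "data" thin pool and the thin disks carved out of it;
# for the pool, the Data% column is its real usage.
lvs pve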

Maybe someone else here can give you more helpful advice?
 
Thanks both. @jlauro how exactly can I "free up space"? Do I need to somehow log into the main pve console and do it from there? How can I see what is contained in there?

@gfngfn256 I know it seems tight, but I don't expect this VM to need more than 20GB (and the 32GB was the default for the image I'm using). The other LXC container for sure only needs 10GB. Is there some way I can increase the size of the LVM-Thin disk, so that there is more space for the Home Assistant VM?
 
@gfngfn256 I know it seems tight, but I don't expect this VM to need more than 20GB (and the 32GB was the default for the image I'm using).
Even so, your entire pool is only 18G. When you run out of space you will likely have data corruption.
Is there some way I can increase the size of the LVM-Thin disk, so that there is more space for the Home Assistant VM?
Yes, there are many ways; which one works for you depends on your environment and what is available to you.



 
@bbgeek17 Can you help me with what kind of information I could provide in order to help you advise further?

@jlauro The output of fdisk (on the main pve shell, since the VM won't start) is the following:

Code:
root@pve:~# fdisk -l
Disk /dev/sda: 59.63 GiB, 64023257088 bytes, 125045424 sectors
Disk model: TS64GMSA370     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B5C1F09-5E16-43D3-9687-B152EF53AC6B

Device       Start       End   Sectors  Size Type
/dev/sda1       34      2047      2014 1007K BIOS boot
/dev/sda2     2048   1050623   1048576  512M EFI System
/dev/sda3  1050624 125045390 123994767 59.1G Linux LVM


Disk /dev/mapper/pve-swap: 7.38 GiB, 7918845952 bytes, 15466496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 24.94 GiB, 26776436736 bytes, 52297728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--100--disk--0: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
GPT PMBR size mismatch (69206015 != 83886079) will be corrected by write.
The backup GPT table is not on the end of the device.


Disk /dev/mapper/pve-vm--100--disk--1: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: AEE03013-0EAC-46EF-ACDD-FC7567E9CD96

Device                                   Start      End  Sectors  Size Type
/dev/mapper/pve-vm--100--disk--1-part1    2048    67583    65536   32M EFI System
/dev/mapper/pve-vm--100--disk--1-part2   67584   116735    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part3  116736   641023   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part4  641024   690175    49152   24M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part5  690176  1214463   524288  256M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part6 1214464  1230847    16384    8M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part7 1230848  1427455   196608   96M Linux filesystem
/dev/mapper/pve-vm--100--disk--1-part8 1427456 69205982 67778527 32.3G Linux filesystem


Disk /dev/mapper/pve-vm--101--disk--0: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
The backup GPT table is corrupt, but the primary appears OK, so that will be used.


Disk /dev/sdb: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model:  SA400S37240G   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: F9CB94C0-B87F-4FED-98DE-802144BC90EB

Device     Start       End   Sectors   Size Type
/dev/sdb1  65535 468862094 468796560 223.5G Linux filesystem
 
I see 3 main options:
1. Deleting 101 if you want to make room for 100 (hoping 100 isn't already corrupt from running out of space); see the sketch below for roughly what that looks like.
2. Adding an additional drive, and making that available for LVM to grow into (also subject to the possible corruption issue from running out of space).
3. Rebuilding with a smaller / (root) partition so more is available to VMs, and restoring the VMs from backup.
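For option 1, roughly what that would look like from the PVE shell (just a sketch; the dump directory is a placeholder for wherever you actually keep backups):

Code:
# Back up container 101 first (the path is only an example location).
vzdump 101 --mode stop --dumpdir /path/to/backups

# Then remove it, which returns its 10G to the thin pool.
pct destroy 101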
 
@jlauro
Your option 2): I have an additional external drive connected (listed above as sdb, with 240GB of space). Is it possible to use this as your suggested "additional drive", as a temporary solution to allow me to boot the VM? As I mentioned, when running normally this VM should only need around 20GB of space, and something has gone wrong to make the disk full. At the moment I just need to figure out what has eaten the space, so I need a way to boot it.

Your option 3): Could you elaborate in more detail what this means and what it would entail? What do you mean by "rebuild", and which "root partition" are you referring to? Thanks for your patience here, I am a complete beginner.
 
@jlauro and @bbgeek17

I managed to get the VM to start by doing the following:
Code:
rm -f /run/lock/qemu-server/lock-100   # remove the stale lock file for VM 100
qm stop 100                            # make sure the VM is fully stopped
lvresize -L +1G /dev/pve/data          # grow the "data" thin pool by 1G
qm start 100

This increased the size just enough that the pool is now only almost full, which allowed the VM to start:

[screenshot: LVM-Thin usage after the resize]


However, running df on the VM's shell console doesn't really seem to show that anything is full:

[screenshot: df output inside the VM]

What am I missing here? How can I figure out why this LVM-Thin is at 95%?

Does this make sense to anyone?

Thank you
 

It looks like your disk /dev/sdb is already formatted. I am assuming that would be your Proxmox backups? If it's empty it could be used, but it would have to be reformatted. Are you sure /dev/sdb is not already being used by something? What does
df /dev/sdb1
report?

As to where your space went:
You have 59.6 GiB instead of 64GB (the difference of over 7.4% comes from overhead and from drive manufacturers counting in base 10 while the OS counts in base 2).

- 7.3G of swap space, part of supporting Proxmox (/dev/mapper/pve-swap: 7.38 GiB)
- 10G for VM 101 (/dev/mapper/pve-vm--101--disk--0: 10 GiB)
- 24.94G for the main Proxmox install (/dev/mapper/pve-root: 24.94 GiB)

59.6 - 7.3 - 10 - 24.94 = 17.36G is what's left, and that is what you are trying to fit the 40GB VM 100 into. It doesn't fit (even if only 20GB of it is used).
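You can read that same breakdown straight out of LVM on the PVE shell, assuming the default pve volume group; something like:

Code:
# Every logical volume in the pve volume group, including the hidden
# volumes that back the "data" thin pool.
lvs -a pve -o lv_name,lv_size,data_percent

# Total size of the volume group and how much is still unallocated.
vgs pve -o vg_size,vg_free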
 
When things are "thin", that means you are telling the system (VM) below that it has more space available than there really is. That can be good, because systems typically leave about 30% empty, and if you have a lot of VMs, all that wasted space adds up. However, when the VMs actually use closer to what you created them with and the host system runs out of space, bad things happen. The guest thinks it has lots of free storage, but the host has run out. So when the VM tries to write something new, it can't actually be saved, because the host has no available storage left.
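If you want to see that behaviour without touching your real disks, here is a throwaway experiment (just a sketch, nothing Proxmox-specific) that builds an over-committed thin pool on a loop device; run it as root on a test machine only:

Code:
# Create a 20G sparse file and expose it as a loop device.
truncate -s 20G /tmp/thin-demo.img
LOOP=$(losetup --find --show /tmp/thin-demo.img)

# Build a volume group with a 15G thin pool on it.
pvcreate "$LOOP"
vgcreate demo "$LOOP"
lvcreate --type thin-pool -L 15G -n pool demo

# Carve a 40G "disk" out of the 15G pool; LVM allows it, and that is the over-commit.
lvcreate -V 40G --thinpool demo/pool -n vmdisk

# The guest of such a disk would see 40G, but Data% shows the pool's real usage.
lvs demo

# Clean up the experiment.
lvremove -y demo
vgremove -y demo
losetup -d "$LOOP"
rm /tmp/thin-demo.img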
 
It looks like your disk /dev/sdb is already formatted. I am assuming that would be your Proxmox backups? If it's empty it could be used, but it would have to be reformatted. Are you sure /dev/sdb is not already being used by something? What does
df /dev/sdb1
report?
@jlauro Yes, you are right, I am indeed using it for some data storage (though it currently uses only around 1GB):
Code:
df /dev/sdb1 -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       219G  1.1G  207G   1% /mnt/SSD_240GB

Thanks very much for explaining explicitly what is going on with the storage allocation; that was very helpful and makes much more sense to me now. Your explanation raises a couple of questions, if you can help answer them:

1) I didn't realise the main Proxmox install uses 25GB, so that explains a lot. However, I can see that it is currently only using around 5GB:
/dev/mapper/pve-root 25G 5.1G 19G 22%
I guess when you said earlier "rebuilding with a smaller / (root) partition", this is what you were referring to? Would it make sense, for example, to reduce this from 25GB to say 16GB, and would this need to be set somehow during a complete fresh install?

2) You said I have "59.6 - 7.3 - 10 - 24.94 = 17.36G" left for the VM. This I understand. However, when I first created this VM it was allocated 32GB, which I now understand is too big to fit. But the VM was created without issue at the time! I assume this is due to the nature of the "thin" storage that you talked about? I can see from the output of df on VM 100 that it is only using around 8GB of space. So from your explanation of thin storage above, I would expect that it is fine to allocate 32GB or 40GB to this VM, as long as in reality it doesn't use more than the 17GB remaining on the main drive?

Do I understand things right?
 
1) I didn't realise the main Proxmox install uses 25GB, so that explains a lot. However, I can see that it is currently only using around 5GB:
/dev/mapper/pve-root 25G 5.1G 19G 22%
I guess when you said earlier "rebuilding with a smaller / (root) partition", this is what you were referring to? Would it make sense, for example, to reduce this from 25GB to say 16GB, and would this need to be set somehow during a complete fresh install?

2) You said I have "59.6 - 7.3 - 10 - 24.94 = 17.36G" left for the VM. This I understand. However, when I first created this VM it was allocated 32GB, which I now understand is too big to fit. But the VM was created without issue at the time! I assume this is due to the nature of the "thin" storage that you talked about? I can see from the output of df on VM 100 that it is only using around 8GB of space. So from your explanation of thin storage above, I would expect that it is fine to allocate 32GB or 40GB to this VM, as long as in reality it doesn't use more than the 17GB remaining on the main drive?

Do I understand things right?
1. Yes, I think that would best be resized during a fresh install. There are ways to shrink partitions, mostly by booting from other media such as a USB stick or CD and then running a utility to resize. It might be possible to shrink it live, but I never have, and you will want to look that up in a more general Linux forum. Although only 5.1GB is being used, the entire 25GB is allocated and reserved on the SSD, as it is not on a "thin" LVM.

2. Yes, it sounds like you are understanding it right. In general it is fine to allocate more than you have, but you need to keep track of the used/free space of the LVM pool so it doesn't run out.
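As a rough sketch of how you could keep track of it (the 80% threshold and the pve/data pool name are just examples), something like this run from cron on the PVE host would warn you:

Code:
#!/bin/sh
# Warn when the LVM thin pool crosses a usage threshold.
POOL="pve/data"   # default Proxmox thin pool
LIMIT=80          # percent

# data_percent comes back like "  94.52"; strip spaces and drop the decimals.
USED=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ' | cut -d. -f1)

if [ "$USED" -ge "$LIMIT" ]; then
    echo "WARNING: thin pool $POOL is ${USED}% full" | wall
fi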
 
@jlauro can you help me with this? Particularly 2) above. Thank you so much!

If you want to use your external disk, you will want to first make backups of everything if you haven't already... especially that external drive. Also, be advised that mixing drive types (i.e. an SSD and a slower USB drive) in the same volume group isn't best practice, and if either drive fails, essentially all of it fails...
That said, if you want to do that anyway...
You will then need to delete the current partition on that drive, prepare the drive (or a partition on it, if you don't want to use the entire drive) for LVM, and then use vgextend to extend the pve volume group.
You will not need to resize the filesystem, but I am not sure about the logical volume, as I haven't done this with Proxmox or thin volumes before. (Looking at my Google search results, you will probably want to lvextend /dev/pve/data.)
You should read a few guides on LVM and increasing storage. Here is one that came up in a Google search:
https://kenmoini.com/post/2018/10/quick-n-dirty-adding-disks-to-proxmox/
(but I think it's slightly off, as it was written for an older version of Proxmox)
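Putting that together, roughly what it could look like on the PVE shell. This is only a sketch and it destroys everything on /dev/sdb, so copy that data elsewhere first and remove any /etc/fstab entry for it:

Code:
# DESTRUCTIVE: wipes the external 240GB disk.
umount /mnt/SSD_240GB     # stop using the old filesystem
wipefs --all /dev/sdb     # remove its existing partition table and signatures

pvcreate /dev/sdb         # turn the whole disk into an LVM physical volume
vgextend pve /dev/sdb     # add it to the existing pve volume group
lvextend -L +50G pve/data # grow the thin pool into the new space (size is an example)

lvs pve                   # Data% of the pool should now be much lower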
 
As I said above, if I were you I would make backups of everything, then learn & understand what the local & local-lvm storages mean to Proxmox.

In brief:

"local" stores ISOs/backups/snippets/templates, and it lives on the root filesystem of the PVE OS alongside all its other files;
"local-lvm" stores only the disks of VMs/LXCs.

then do a fresh install, allocating what YOU NEED for your system.

Something to start you off learning: https://pve.proxmox.com/wiki/Installation#advanced_lvm_options
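For reference, that page describes the installer's advanced LVM options, which is where you set these sizes at install time. Purely as an illustration (option names as listed on that page; the values are only guesses for a 64GB SSD, not a recommendation):

Code:
# Advanced LVM options in the PVE installer, with example values only:
#   hdsize   : 59   # use the whole SSD
#   swapsize : 4    # smaller swap than the ~7.4G you got by default
#   maxroot  : 16   # cap the root volume instead of the ~25G default
#   minfree  : 4    # space left unallocated in the pve volume group
#   maxvz    : (leave empty, so the rest goes to the local-lvm "data" thin pool)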

Good luck & happy learning.
 
@jlauro and @gfngfn256

Thanks both, there is a lot of really helpful advice here, and I will do as you say and learn more about Proxmox storage.

My main source of confusion is the following:

I have 59.6G in total to work with.
- 7.3G goes to swap
- 24.9G goes to Proxmox root

So that should leave 27.4G to work with.

I have one LXC container, which has been allocated 10G, and one VM, which has been allocated 32G. So for sure this amounts to more than the 27G available. However, on inspection, the LXC container only uses 3.3G of its allocated 10G, and the VM only uses 9G of its allocated 32G. Since these are both "thin" storage on local-lvm, I would expect them to need only 3.3G + 9G = 12.3G on disk together. Since this is much less than the 27.4G truly available, I expected to be fine.

The reason I make this point is that I don't believe I should need to follow any of the above suggestions regarding either 1) doing a fresh install to reduce the Proxmox root space, or 2) adding any additional storage drives.

e.g.

then do a fresh install, allocating what YOU NEED for your system.

I see 3 main options:
1. Deleting 101 if you want to make room for 100 (hoping 100 isn't already corrupt from running out of space).
2. Adding an additional drive, and making that available for LVM to grow into (also subject to the possible corruption issue from running out of space).
3. Rebuilding with a smaller / (root) partition so more is available to VMs, and restoring the VMs from backup.
 
So for sure this amounts to more than the 27G available. However, on inspection
As a rule: don't create a situation where a thin pool doesn't really have the space that has been allocated from it. AFAIK the VMs and LXCs aren't aware of your limitations. For example, a Home Assistant VM, when updating itself, can (temporarily) far exceed the "normal" disk usage you now see. You are asking for trouble.

24.9G goes to Proxmox root
On your limited 59GB system, why do you need this amount (almost 50%) for root?
 
For example, a Home Assistant VM, when updating itself, can (temporarily) far exceed the "normal" disk usage you now see.
This is a good point, and may in fact be the reason for the system refusing to boot after an update. I will bear this in mind for the future.


As a rule: don't create a situation where a thin pool doesn't really have the space that has been allocated from it.
I thought that the ability to do this was the whole point of "thin" storage? Now maybe I am misunderstanding...


On your limited 59GB system, why do you need this amount (almost 50%) for root?
As far as I understand, Proxmox by default allocates around 50% of the total available disk space to the pve-root installation (which is of the non-thin type). I was not aware of this when first doing the installation, which is why it is currently taking up 50%.

Having said this, after reading around, it seems that it is possible to set the root partition's size during the initial install. However, the information online about what the minimum needs to be is inconclusive. I suppose I could reinstall Proxmox and use something smaller, but I am not sure what would be safe (maybe 16GB? Or even less?).
 
