[SOLVED] No more space on VM

Result of the last command
Code:
resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 2621440 (4k) blocks long. Nothing to do!
 
Firstly I hope you have backups.

Secondly, it appears you added (or tried to add) more space to that virtual drive twice. I don't know what went on when you did that, but what I would try (if possible) is to shut down the whole node & then restart it.

What is the health of the storage PVE (where this virtual drive belongs) within the host? Is this ZFS? If so, what is the health of the ZFS pool?
 
Check ZFS health with zpool status

Do you have any snapshots/backups/timeshift using space on that Virtual Drive?

The easiest thing I would do is back up all the critical VM data to a different storage (maybe add another virtual disk to the VM?) and then create a new VM with the required larger size & copy that data over (by importing the added virtual disk into it?). A rough sketch of the disk-add step is below.
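A minimal sketch of that approach, assuming VM ID 113 and the ZFS-backed storage named PVE (sizes and IDs are only examples, adjust them to your setup):

Code:
# On the PVE host: attach a new 50G virtual disk to VM 113 as scsi1, backed by storage "PVE"
qm set 113 --scsi1 PVE:50
# Inside the guest: partition/format/mount that disk and copy the critical data onto it.
# On recent PVE versions the data disk can later be reassigned to the new, larger VM,
# e.g. with "qm disk move 113 scsi1 --target-vmid <new-vmid>" (check qm disk move --help on your version first).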
 
Zpool status on host :
Code:
 zpool status
  pool: tank
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 19:16:29 with 0 errors on Sun Jul 14 19:40:31 2024
config:

        NAME                                                     STATE     READ WRITE CKSUM
        tank                                                     ONLINE       0     0     0
          raidz1-0                                               ONLINE       0     0     0
            ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0XSNJH8             ONLINE       0     0     0
            ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K3JHXTUF             ONLINE       0     0     0
            ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E5NC6U41             ONLINE       0     0     0
            ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5JYK4FP             ONLINE       0     0     0
        logs
          mirror-1                                               ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_500GB_S3R3NB0J939519V-part3  ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_500GB_S3R3NB0J939519V-part4  ONLINE       0     0     0
        cache
          ata-Samsung_SSD_850_EVO_500GB_S3R3NB0J939519V-part5    ONLINE       0     0     0
          ata-Samsung_SSD_850_EVO_500GB_S3R3NB0J939519V-part6    ONLINE       0     0     0

errors: No known data errors

I have other VMs/CTs on the same storage, but no backups or snapshots.
 

Attachments

  • 2024-07-29 16_09_13-Proxmox - Proxmox Virtual Environment – Brave.png (89.6 KB)
  • 2024-07-29 16_09_24-Proxmox - Proxmox Virtual Environment – Brave.png (39.4 KB)
  • 2024-07-29 16_09_49-Proxmox - Proxmox Virtual Environment – Brave.png (58.1 KB)
I still don't know where the PVE-named (Proxmox) storage actually lives. What does zpool list show? What does zfs list show? Please also show a screenshot of the PVE Summary.

Also show the output on the host of cat /etc/pve/storage.cfg & qm config 113

I notice the last scrub was done on Jul 14, which is 5 days before your initial post. So you may want to (eventually) run another scrub to see whether the pool really is healthy; it can be started as shown below.
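For reference, a scrub can be kicked off and watched like this (pool name tank taken from your zpool status output):

Code:
zpool scrub tank      # runs in the background; can take many hours on a pool this size
zpool status tank     # shows scrub progress and, once finished, the result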
 
Upon researching your issue, maybe try this instead:

sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
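If lvresize reports nothing to allocate, it is worth first checking whether the volume group actually sees the extra space. A quick check inside the VM could look like this (device names assume the default Ubuntu LVM layout visible in your screenshots):

Code:
sudo pvs   # does the physical volume already cover the enlarged partition?
sudo vgs   # is there any VFree left in the volume group?
sudo lvs   # current size of the logical volume
# If VFree is 0 even though the virtual disk grew, the PV has to be grown first:
sudo pvresize /dev/sda3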
 
Zpool list result :
Code:
zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  14.5T  13.2T  1.33T        -         -    35%    90%  1.00x    ONLINE  -
Code:
zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
tank                        9.68T   723G   198K  /tank
tank/Backup                  344G   723G   136G  /tank/Backup
tank/Downloads               261G   723G   260G  /tank/Downloads
tank/Movies                 7.25T   723G  7.25T  /tank/Films
tank/Owncloud                289G   723G   271G  /tank/Owncloud
tank/TvShows                1.30T   723G  1.09T  /tank/TvShows
tank/pve                     253G   723G   672M  /tank/pve
tank/pve/subvol-101-disk-4  30.1G  71.8G  18.2G  /tank/pve/subvol-101-disk-4
tank/pve/subvol-110-disk-0  2.50G  6.08G  1.92G  /tank/pve/subvol-110-disk-0
tank/pve/subvol-112-disk-1   953K  8.00G   140K  /tank/pve/subvol-112-disk-1
tank/pve/subvol-112-disk-2  2.68G  10.5G  2.46G  /tank/pve/subvol-112-disk-2
tank/pve/vm-100-disk-0      97.5G   771G  29.1G  -
tank/pve/vm-102-disk-0      6.44M   723G  81.4K  -
tank/pve/vm-102-disk-1      50.1G   740G  24.7G  -
tank/pve/vm-113-disk-0      69.4G   769G  12.5G  -
Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso,snippets,vztmpl,images,backup
        prune-backups keep-all=1
        shared 0

dir: Backup
        path /tank/Backup
        content iso,rootdir,backup,images,vztmpl,snippets
        prune-backups keep-all=1
        shared 0

zfspool: PVE
        pool tank/pve
        content rootdir,images
        sparse 0
Code:
 qm config 113
boot: order=scsi0;ide2;net0
cores: 2
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//cdn.icon-icons.com/icons2/2699/PNG/512/nextcloud_logo_icon_168948.png'/></a>%0A%0A  # Nextcloud
ide2: none,media=cdrom
memory: 2048
name: Nextcloud
net0: virtio=D6:0B:54:8A:C6:EF,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: PVE:vm-113-disk-0,size=45G
scsihw: virtio-scsi-pci
smbios1: uuid=8088c7ec-93e7-4c4f-97a2-7d14aecd0129
sockets: 2
tags: nextcloud
vmgenid: cab13127-0797-42d3-bdfc-730e23c7c7cd
And the result of the lvresize command is in the attached screenshot.
 

Attachments

  • 2024-07-29_19-57.png (36.6 KB)
OK so that worked (partially). Now you need to resize the FS with:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

& check again with df -h

Also show the output of lsblk

Maybe try shutting down & restarting the VM & repeating to get the rest of that space.
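The check inside the VM could look roughly like this (the column selection is just a suggestion and needs a reasonably recent util-linux for FSSIZE/FSAVAIL):

Code:
df -h /                                       # free space on the root filesystem
lsblk -o NAME,SIZE,FSSIZE,FSAVAIL,MOUNTPOINT  # disk vs. partition vs. LV sizes at a glance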
 
The result is better: there is a little more free space now, but still not all of it.
 

Attachments

  • 2024-07-29_20-52.png (61.5 KB)
Maybe try shutting down & restarting the VM & repeating to get the rest of that space.
AFTER shutting down & restarting the VM, "repeating" means running all of the above commands:

Code:
sudo pvresize /dev/sda3
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
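As a side note (not needed here, since the reboot already did the job): for virtio-scsi disks the guest can usually be told to re-read the new disk size without a reboot, along these lines:

Code:
# Ask the SCSI layer to re-read the capacity of sda, then repeat pvresize/lvresize/resize2fs
echo 1 | sudo tee /sys/class/block/sda/device/rescan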
 
It works.
Thanks a lot for your help.
 

Attachments

  • brave_uYZkk7nFPx.png (52.6 KB)
Happy it's all working in the end. I imagine that enlarging the virtual disk several times before dealing with the VM's PV/LV internally is what caused this; the usual order of operations is sketched below for reference.

Maybe prefix the thread title with the [SOLVED] tag (upper right-hand corner, under the title).
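For completeness, a rough sketch of that order of operations, assuming VM 113 and an example +20G (growpart is an alternative to the reboot-and-repeat route used in this thread; it comes from the cloud-guest-utils package):

Code:
# 1. On the Proxmox host: enlarge the virtual disk
qm resize 113 scsi0 +20G
# 2. Inside the guest: grow partition 3, then the PV, the LV and finally the filesystem
sudo growpart /dev/sda 3
sudo pvresize /dev/sda3
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv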
 
