Formatted to ext4, now drive shows only 50 GB instead of 500 GB

Ayno

New Member
May 11, 2025
Hello,
I'm having an issue with my 500 GB external hard drive. I formatted it twice using the ext4 file system with GParted inside a Linux virtual machine running on my PC. The formatting process seems to go smoothly, but after some time, the drive's capacity shows 50 GB instead of the expected 500 GB.
I don't understand why this is happening. Could it be a hardware issue, a partitioning problem, something related to the virtual machine, or the file system itself?
Thanks in advance for your help!

(screenshot attachment: 1746982755931.png)
 
Hello, thank you for your interest in my problem.

Here is the output of the command:

Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE      LABEL MODEL
sda                            8:0    0 111.8G  0 disk                              WDC_WDS120G2G0A-00JH30
├─sda1                         8:1    0  1007K  0 part                             
├─sda2                         8:2    0   512M  0 part /boot/efi  vfat             
└─sda3                         8:3    0 111.3G  0 part            LVM2_member       
  ├─pve-swap                 253:0    0     4G  0 lvm  [SWAP]     swap             
  ├─pve-root                 253:1    0  52.7G  0 lvm  /          ext4             
  ├─pve-data_meta0           253:2    0     1G  0 lvm                               
  ├─pve-data_tmeta           253:3    0     1G  0 lvm                               
  │ └─pve-data-tpool         253:5    0  52.6G  0 lvm                               
  │   ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm                               
  │   ├─pve-vm--100--disk--1 253:7    0    37G  0 lvm                               
  │   └─pve-data             253:8    0  52.6G  1 lvm                               
  └─pve-data_tdata           253:4    0  52.6G  0 lvm                               
    └─pve-data-tpool         253:5    0  52.6G  0 lvm                               
      ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm                               
      ├─pve-vm--100--disk--1 253:7    0    37G  0 lvm                               
      └─pve-data             253:8    0  52.6G  1 lvm                               
sdb                            8:16   0 465.8G  0 disk                              ST500LM012_HN-M500MBB
└─sdb1                         8:17   0 465.8G  0 part            ext4             
dir: local
        path /var/lib/vz
        content backup,snippets,rootdir,images,iso,vztmpl
        prune-backups keep-all=1
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: Backupdisk
        path /mnt/disque
        content backup,images
        nodes pve
        prune-backups keep-all=1
        shared 0
 
I can see the disk, and its partition is as large as it should be, but I cannot see the /mnt/disque mount point.
Please also run and share the output of these commands:
Bash:
resize2fs /dev/sdb1
df -h
fdisk -l /dev/sdb
After thinking about it a bit more, I think what happens here is that you are seeing the size of a directory in / (a.k.a. local) rather than the mount point of the disk/filesystem, because the disk is not mounted. See if local shows the same size.
I like to use chattr +i on mount points so the directory cannot be written to while it is in that state. If you share the output of blkid, I can help you get it mounted.
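A quick way to check is the `mountpoint` utility from util-linux (a minimal sketch; /mnt/disque is the path from this thread):

```shell
# Tell whether a directory is a real mount point or just an empty
# directory sitting on the root filesystem ("local" in Proxmox terms).
is_mounted() {
    if mountpoint -q "$1"; then
        echo "mounted"
    else
        echo "NOT mounted"
    fi
}

is_mounted /            # the root filesystem is always a mount point
is_mounted /mnt/disque  # if "NOT mounted", df is showing pve-root here
```

If the second call prints "NOT mounted", the 50 GB you see is the free space of pve-root, not the external drive.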
 
Looks like /dev/sdb1 is not (or no longer) mounted at /mnt/disque, so you are seeing the size of the drive that hosts /mnt/disque, which is pve-root. Make sure to mount your additional drive before using it. Maybe add is_mountpoint 1 to the configuration of Backupdisk in /etc/pve/storage.cfg so Proxmox knows this is a mount point.
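With that flag, the Backupdisk entry in /etc/pve/storage.cfg would look something like this (only the is_mountpoint line is new; the rest is copied from your output above):

```
dir: Backupdisk
        path /mnt/disque
        content backup,images
        nodes pve
        is_mountpoint 1
        prune-backups keep-all=1
        shared 0
```

With is_mountpoint 1, Proxmox marks the storage as unavailable instead of silently writing into the empty directory when the disk is not mounted.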
 
but after some time, the drive's capacity shows 50 GB instead of the expected 500 GB.
Probably, after some time, that (flaky) external USB drive goes into sleep/standby, and then you are seeing the "local" storage on that mount point.
 
Initially it is correctly mounted on the mount point and displays its normal size. After a few hours, however, the reported size shrinks.

Code:
root@pve:~# resize2fs /dev/sdb1
df -h
fdisk -l /dev/sdb
resize2fs 1.46.5 (30-Dec-2021)
Please run 'e2fsck -f /dev/sdb1' first.

Filesystem            Size  Used Avail Use% Mounted on
udev                  3.9G     0  3.9G   0% /dev
tmpfs                 787M  1.1M  786M   1% /run
/dev/mapper/pve-root   52G   38G   13G  76% /
tmpfs                 3.9G   46M  3.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2             511M  336K  511M   1% /boot/efi
/dev/fuse             128M   16K  128M   1% /etc/pve
tmpfs                 787M     0  787M   0% /run/user/0
Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: USB 2.0 Drive  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x70b8301a

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 976768064 976766017 465.8G  7 HPFS/NTFS/exFAT

I don't know what to do, honestly I'm lost.
 
I don't know what to do, honestly I'm lost.
See my above post.

It appears you are running that drive on a USB 2.0 interface connection - that is probably not going to work very well. You could start by trying a different USB port/USB enclosure.

You may also be helped by adjusting the power management for that drive/enclosure/port.
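For example, power management could be relaxed with hdparm (a sketch; the -B/-S values are common choices, not something tested on your enclosure, and some USB bridges ignore these commands entirely):

```shell
# Hedged sketch: try to keep the external drive from spinning down.
# -B 254 = least aggressive APM power saving; -S 0 = no standby timer.
# Guarded so it is a safe no-op on machines without the drive.
tune_power_management() {
    dev="$1"
    if [ -b "$dev" ] && command -v hdparm >/dev/null 2>&1; then
        hdparm -B 254 "$dev"
        hdparm -S 0 "$dev"
    else
        echo "skipped: $dev not present or hdparm missing"
    fi
}

tune_power_management /dev/sdb
```

Note that these settings do not always survive a power cycle of the enclosure, so you may need to reapply them, e.g. from a udev rule or a boot script.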
 
Ok.

I recreated the /mnt/disque mount point and added it to this file:

(screenshot attachment: 1746993490296.png)

The first time, I put this line in it:
Code:
UUID=74cfab85-ff8b-4365-b13f-21036d2aafd2 /mnt/disque ext4 defaults 0 2
But it prevented Proxmox from starting up, probably because the disk wasn't available yet.

The mount point was probably skipped at boot, since when I remount it manually, "Backupdisk" reappears as before.
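One way to keep a missing disk from blocking boot is the nofail mount option (a suggestion on my part, optionally with a systemd device timeout; the UUID is the one from the fstab line above):

```
UUID=74cfab85-ff8b-4365-b13f-21036d2aafd2 /mnt/disque ext4 defaults,nofail,x-systemd.device-timeout=10s 0 2
```

With nofail, systemd waits at most the given timeout for the device and then continues booting without it, instead of dropping into emergency mode.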



I'll wait a few hours and keep you posted.
Thank you for your interest in my post.
 