I use Nextcloud (NextcloudPi) running natively on an old i7 gen3 laptop, but I am now moving everything to a PVE machine with an i7 gen10.
I started a test VM with Debian 10 with just a few GB, thinking I would expand it later, but now I need to increase the size of the partition and I can't find a way to grow my sda1 partition to the maximum (my output is below).
I was able to use the Proxmox UI to change the size using the resize option under Hardware > Hard Disk. I understand that this is not automatically reflected in the VM and that there are additional steps to take inside the VM.
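For reference, I believe the resize I did in the UI is equivalent to running something like this on the PVE host shell (the VM ID and disk name here are just placeholders, not necessarily mine):

Code:
# grow the VM's virtual disk by 800G on the Proxmox host
# (100 and scsi0 are example values; use your own VM ID and disk name)
qm resize 100 scsi0 +800G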
Inside the VM, though, I can't find a solution that works.
Thanks in advance to anyone who can shed some light for this newbie who is loving Proxmox.
Code:
root@nextcloudpi:/# lsblk /dev/sda
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  816G  0 disk
├─sda1   8:1    0   15G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0  975M  0 part [SWAP]
Code:
root@nextcloudpi:/# fdisk -l /dev/sda
Disk /dev/sda: 816 GiB, 876173328384 bytes, 1711276032 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x90a50d67
Device     Boot    Start      End  Sectors  Size Id Type
/dev/sda1  *        2048 31553535 31551488   15G 83 Linux
/dev/sda2       31555582 33552383  1996802  975M  5 Extended
/dev/sda5       31555584 33552383  1996800  975M 82 Linux swap / Solaris
Code:
root@nextcloudpi:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.9G     0  2.9G   0% /dev
tmpfs           597M  8.2M  588M   2% /run
/dev/sda1        15G   14G  316M  98% /
tmpfs           3.0G     0  3.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.0G     0  3.0G   0% /sys/fs/cgroup
tmpfs           597M     0  597M   0% /run/user/0
tmpfs           597M     0  597M   0% /run/user/1000
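From what I've gathered so far, the additional steps inside the VM would be something along these lines, since the extended/swap partition (sda2/sda5) sits directly after sda1 and blocks it from growing. This is just my rough understanding (it assumes the root filesystem is ext4 and that growpart from cloud-guest-utils can be installed), so please correct me if it's wrong:

Code:
# rough sketch of what I think has to happen inside the VM -- not tested yet
apt install cloud-guest-utils     # provides growpart (assuming it isn't installed already)
swapoff -a                        # stop using the swap partition that sits right after sda1
sfdisk --delete /dev/sda 5        # remove the logical swap partition
sfdisk --delete /dev/sda 2        # remove the extended container
growpart /dev/sda 1               # grow partition 1 into the now-free space
resize2fs /dev/sda1               # grow the ext4 filesystem online to fill the partition
# afterwards: recreate swap at the end of the disk (or switch to a swap file),
# update /etc/fstab accordingly, and reboot if the kernel doesn't pick up the new table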