Resizing Linux VM partitions

techie716

New Member
Nov 26, 2019
I created a PVE setup with a RAID-Z pool built from 3× 8 TB SATA hard drives on an HP Z620 workstation. I am very new to Proxmox PVE, but not new to IT. I then created my first VM running Linux Mint, giving it 4 of the 16 available cores, 8 of the available 32 GB of RAM, and a 15 TB disk. Everything went fine until I installed Linux Mint 19.2 and rebooted; at that point I received I/O errors.

After researching the problem, I found a post that said to recreate the VM with a 200 GB hard drive and then increase it afterwards. This worked fine until I went into GParted in the Linux VM. I tried to create a partition in the ~15 TB of unused space, but it keeps failing. I tried with both GParted and Disks, and then tried making just a 2 TB partition. Neither option worked.

I am a little vague on the ZFS file system and RAID-Z, and I am stuck. The first VM is intended for my Plex server, which is why I needed the 15 TB of disk space; I originally used the full space available on the three 8 TB drives. The HP does support hardware RAID, but I have it turned off in the BIOS. Any help would be so appreciated.
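Rough numbers for the pool (a back-of-the-envelope sketch; it assumes the drives are exactly 8×10¹² bytes and counts only raidz1 parity, ignoring metadata and padding overhead) show why a fully reserved 15 TB disk leaves the pool almost no headroom:

```python
TB = 10**12   # drive vendors rate disks in decimal terabytes
TiB = 2**40   # ZFS tools report binary tebibytes

drive = 8 * TB
usable = 2 * drive    # 3-disk raidz1: one drive's worth of space goes to parity
vm_disk = 15 * TB     # the requested VM disk size

print(f"pool usable ≈ {usable / TiB:.2f} TiB")               # 14.55 TiB
print(f"VM disk     ≈ {vm_disk / TiB:.2f} TiB")              # 13.64 TiB
print(f"headroom    ≈ {(usable - vm_disk) / TiB:.2f} TiB")   # 0.91 TiB
```

Under a terabyte of headroom on a copy-on-write pool is very tight, and any extra overhead pushes it over the edge.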

Dusty Lunn
techie716@gmail.com


I have included some pertinent information below.


Proxmox PVE Virtual Environment 6.0-4
HP Z620 workstation
3× 8 TB Seagate Exos SATA drives
32 GB ECC memory
Xeon E5-2690 processor, 8 cores with Hyper-Threading



Node 'pve'

CPU usage: 0.83% of 16 CPU(s)
IO delay: 0.12%
Load average: 0.25, 0.23, 0.20
RAM usage: 61.20% (19.18 GiB of 31.34 GiB)
KSM sharing: 1.32 GiB
HD space (root): 0.00% (971.88 MiB of 21.10 TiB)
SWAP usage: N/A
CPU(s): 16 x Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (1 Socket)
Kernel Version: Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200)
PVE Manager Version: pve-manager/6.0-4/2a719255

proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 
ZFS is a copy-on-write file system. If you did not set the storage to "thin", the whole size of the new VM disk is reserved up front, leaving ZFS without enough free space to function properly.

Give the VM a smaller disk! If the pool still gives you problems and you have just started out, destroy and recreate it.
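If you would rather keep the large disk and use thin provisioning, a sketch of the relevant entry in /etc/pve/storage.cfg (the storage name "local-zfs" and pool path "rpool/data" are examples; substitute your own) is the `sparse 1` option, which makes newly created disks thin so no reservation is set:

```
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
```

Note that `sparse 1` only affects disks created afterwards; an existing zvol keeps its reservation until you clear it manually, e.g. with `zfs set refreservation=none rpool/data/vm-100-disk-0` (the dataset name here is illustrative; check yours with `zfs list`).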
 
How did you turn off the RAID controller in the BIOS? I'm planning to get a Z620 for a similar setup. Did you manage to solve your problem? And once you disabled the controller in the BIOS, did you run into any issues creating the ZFS volume in Proxmox?
 
