I set up Proxmox PVE on an HP Z620 workstation with a RAID-Z pool built from three 8 TB SATA hard drives. I am very new to Proxmox, but not new to IT. I then created my first VM for Linux Mint, giving it 4 of the 16 available cores and 8 of the 32 GB of RAM, along with a 15 TB virtual disk. Everything went fine until I installed Linux Mint 19.2 and rebooted; at that point I started getting I/O errors.

After researching the problem, I found a post that said to recreate the VM with a 200 GB hard drive and then increase it afterwards. That worked fine until I went into GParted inside the Linux Mint VM. I tried to create a partition in the 15 TB of unused space, but it keeps failing. I tried with both GParted and Disks, and then tried just making a 2 TB partition instead. Neither option worked.

I am a little vague on ZFS and RAID-Z, and I am stuck. This first VM is intended for my Plex server, which is why I need the 15 TB of disk space. I originally used the full space available on the three 8 TB drives. The Z620 does support hardware RAID, but I have it turned off in the BIOS. Any help would be much appreciated.
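In case it helps, these are the kinds of commands I can run and post output from. The VM ID (100), dataset path, and device names below are placeholders, not necessarily my exact ones:

# On the Proxmox host: pool size and free space on the RAID-Z pool
zpool list
zfs list -o space

# The zvol backing the VM's virtual disk (dataset path is a guess at the default layout)
zfs get volsize,volblocksize,refreservation rpool/data/vm-100-disk-0

# Growing the virtual disk from the host side (scsi0 is an example; mine may be on a different bus)
qm resize 100 scsi0 +2T

# Inside the Linux Mint guest: the disk and its partition table type
# (an msdos/MBR partition table caps partitions at 2 TiB; anything larger needs a GPT label)
lsblk
sudo parted /dev/sda print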
Dusty Lunn
techie716@gmail.com
I have included some pertinent information below.
Proxmox Virtual Environment (PVE) 6.0-4
HP Z620 workstation
3 x 8 TB Seagate Exos SATA drives
32 GB ECC memory
Xeon E5-2690, 8 cores with Hyper-Threading
Node 'pve'
CPU usage: 0.83% of 16 CPU(s); IO delay: 0.12%
Load average: 0.25, 0.23, 0.20
RAM usage: 61.20% (19.18 GiB of 31.34 GiB); KSM sharing: 1.32 GiB
HD space (root): 0.00% (971.88 MiB of 21.10 TiB); SWAP usage: N/A
CPU(s): 16 x Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (1 socket)
Kernel version: Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200)
PVE Manager version: pve-manager/6.0-4/2a719255
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1