pve-root full and iso location

DomF

Hi,

I've had a few problems with Proxmox; I'll try my best to describe what's been happening:

When I started to play with Proxmox 5.0, I installed it onto a 16 GB USB stick. In my setup I have an SSD set up as a ZFS pool, and I recently added a second hard drive which is a second zpool. The first drive is set up with the directories data/image, data/iso, and data/storage. In PVE, the ISO storage location is set up as type directory pointing to data/iso; the other two (image, storage) are set up as type ZFS, pointing to data/image and data/storage. The second drive and zpool is additional storage for a fileserver.

Generally, things have been going OK. I have uploaded ISO images to the ISO location for when I want to set up a new virtual machine, and over the months I have performed updates on my PVE installation. At some point the ISO upload feature in the web browser stopped working properly, but I was still able to upload ISO images by FTP.

I decided to update again recently (to PVE kernel 4.15); however, the update was failing because I ran out of space on /dev/mapper/pve-root. It seems I only have about 3.8 GB allocated to pve-root, so I deleted unwanted files to free up enough space to finish the update (from pve-kernel 4.13 to 4.15), which I managed to do, and now I have PVE kernel 4.15 running. Something strange has happened with my ISO location: firstly, my uploaded ISO images have disappeared, and secondly, although the location in the PVE GUI should be the SSD directory data/iso, it now references the space of my USB stick's pve-root partition (3.8 GB) instead of the 46 GB available on the SSD.

I have these questions:
1. How do I increase the size of the pve-root partition from 3.8 GB to a higher value? I have 14 GB elsewhere on the USB stick unused.

2. Why is my PVE directory for ISO storage now referencing space on the pve-root partition and not my data/iso directory on the SSD? And how do I change its reference to point back at the SSD space? Do I just delete and re-create it?

Thanks,
Dominic
 
I have these questions:
1. How do I increase the size of the pve-root partition from 3.8 GB to a higher value? I have 14 GB elsewhere on the USB stick unused.

2. Why is my PVE directory for ISO storage now referencing space on the pve-root partition and not my data/iso directory on the SSD? And how do I change its reference to point back at the SSD space? Do I just delete and re-create it?

To look into this, more details about your current configuration would be useful. The easiest way to provide them is to run

Code:
pvereport
 
Hi,

This is some of the output from pvereport (I had to trim off some sections to fit in the post):

Code:
==== general system info ====

# hostname
pve

# pveversion --verbose
proxmox-ve: 5.1-43 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
pve-kernel-4.15: 5.1-3
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-2-pve: 4.10.17-20
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-15
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-19
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-26
pve-container: 2.0-22
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-3
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9

# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.2.200 pve.fritz.box pve pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

# top -b -n 1  | head -n 15
top - 11:34:13 up 35 min,  1 user,  load average: 0.03, 0.06, 0.07
Tasks: 460 total,   2 running, 322 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.1 sy,  0.0 ni, 99.3 id,  0.4 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16424024 total, 14635156 free,  1365100 used,   423768 buff/cache
KiB Swap:  1835004 total,  1835004 free,        0 used. 14711852 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0   57816   7584   5304 S   0.0  0.0   0:01.81 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.15 kthreadd
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:+
    6 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_+
    7 root      20   0       0      0      0 S   0.0  0.0   0:00.00 ksoftirqd/0
    8 root      20   0       0      0      0 I   0.0  0.0   0:00.40 rcu_sched
    9 root      20   0       0      0      0 I   0.0  0.0   0:00.00 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:00.00 migration/0

Code:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             AuthenticAMD
CPU family:            23
Model:                 1
Model name:            AMD Ryzen 7 1700 Eight-Core Processor
Stepping:              1
CPU MHz:               2552.577
CPU max MHz:           3000.0000
CPU min MHz:           1550.0000
BogoMIPS:              5987.94
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             64K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0-15
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

==== info about storage ====

# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

zfspool: zfsimage
    pool data/image
    content rootdir,images
    sparse 1

zfspool: zfsstorage
    pool data/storage
    content rootdir,images
    sparse 1

dir: zfsiso
    path /data/iso
    content iso,backup,vztmpl
    maxfiles 1
    shared 0

dir: zfslocal
    path /data/local
    content rootdir,images
    maxfiles 1
    shared 0

zfspool: zfsnetwkstorage
    pool nas/netwk-storage
    content images,rootdir
    sparse 1


# pvesm status
Name                   Type     Status           Total            Used       Available        %
local                   dir     active         3546848         3342640            4324   94.24%
local-lvm           lvmthin     active         7630848               0         7630848    0.00%
zfsimage            zfspool     active       155503749       114067251        41436498   73.35%
zfsiso                  dir     active         3546848         3342640            4324   94.24%
zfslocal                dir     active         3546848         3342640            4324   94.24%
zfsnetwkstorage     zfspool     active      3770673896       451810140      3318863756   11.98%
zfsstorage          zfspool     active        89515189        48078691        41436498   53.71%

# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=3D88-BCB9 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

# findmnt --ascii
TARGET                                     SOURCE                              FSTYPE     OPTIONS
/                                          /dev/mapper/pve-root                ext4       rw,relatime,errors=remount-ro,data=ordered
|-/sys                                     sysfs                               sysfs      rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/security                   securityfs                          securityfs rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/cgroup                         tmpfs                               tmpfs      ro,nosuid,nodev,noexec,mode=755
| | |-/sys/fs/cgroup/systemd               cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
| | |-/sys/fs/cgroup/net_cls,net_prio      cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
| | |-/sys/fs/cgroup/rdma                  cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,rdma
| | |-/sys/fs/cgroup/blkio                 cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,blkio
| | |-/sys/fs/cgroup/devices               cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,devices
| | |-/sys/fs/cgroup/cpu,cpuacct           cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
| | |-/sys/fs/cgroup/memory                cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,memory
| | |-/sys/fs/cgroup/freezer               cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,freezer
| | |-/sys/fs/cgroup/perf_event            cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
| | |-/sys/fs/cgroup/pids                  cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,pids
| | |-/sys/fs/cgroup/hugetlb               cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
| | `-/sys/fs/cgroup/cpuset                cgroup                              cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
| |-/sys/fs/pstore                         pstore                              pstore     rw,nosuid,nodev,noexec,relatime
| |-/sys/firmware/efi/efivars              efivarfs                            efivarfs   rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/debug                      debugfs                             debugfs    rw,relatime
| |-/sys/fs/fuse/connections               fusectl                             fusectl    rw,relatime
| `-/sys/kernel/config                     configfs                            configfs   rw,relatime
|-/proc                                    proc                                proc       rw,relatime
| `-/proc/sys/fs/binfmt_misc               systemd-1                           autofs     rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15526
|-/dev                                     udev                                devtmpfs   rw,nosuid,relatime,size=8158144k,nr_inodes=2039536,mode=755
| |-/dev/pts                               devpts                              devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
| |-/dev/shm                               tmpfs                               tmpfs      rw,nosuid,nodev
| |-/dev/hugepages                         hugetlbfs                           hugetlbfs  rw,relatime,pagesize=2M
| `-/dev/mqueue                            mqueue                              mqueue     rw,relatime
|-/run                                     tmpfs                               tmpfs      rw,nosuid,noexec,relatime,size=1642404k,mode=755
| |-/run/lock                              tmpfs                               tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k
| |-/run/rpc_pipefs                        sunrpc                              rpc_pipefs rw,relatime
| `-/run/user/0                            tmpfs                               tmpfs      rw,nosuid,nodev,relatime,size=1642400k,mode=700
|-/boot/efi                                /dev/sde2                           vfat       rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
|-/data/image                              data/image                          zfs        rw,xattr,noacl
| `-/data/image/subvol-100-disk-1          data/image/subvol-100-disk-1        zfs        rw,xattr,posixacl
|-/data/storage                            data/storage                        zfs        rw,xattr,noacl
| `-/data/storage/subvol-100-disk-1        data/storage/subvol-100-disk-1      zfs        rw,xattr,posixacl
|-/nas                                     nas                                 zfs        rw,xattr,noacl
| `-/nas/netwk-storage                     nas/netwk-storage                   zfs        rw,xattr,noacl
|   `-/nas/netwk-storage/subvol-100-disk-1 nas/netwk-storage/subvol-100-disk-1 zfs        rw,xattr,posixacl
|-/var/lib/lxcfs                           lxcfs                               fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
`-/etc/pve                                 /dev/fuse                           fuse       rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other

# df --human
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 7.8G     0  7.8G   0% /dev
tmpfs                                1.6G  9.3M  1.6G   1% /run
/dev/mapper/pve-root                 3.4G  3.2G  4.3M 100% /
tmpfs                                7.9G   43M  7.8G   1% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sde2                            253M  288K  252M   1% /boot/efi
data/image                            40G     0   40G   0% /data/image
data/image/subvol-100-disk-1         8.0G  7.7G  320M  97% /data/image/subvol-100-disk-1
data/storage                          40G     0   40G   0% /data/storage
data/storage/subvol-100-disk-1        86G   46G   40G  54% /data/storage/subvol-100-disk-1
nas                                  3.1T     0  3.1T   0% /nas
nas/netwk-storage                    3.1T     0  3.1T   0% /nas/netwk-storage
nas/netwk-storage/subvol-100-disk-1  3.6T  431G  3.1T  12% /nas/netwk-storage/subvol-100-disk-1
/dev/fuse                             30M   24K   30M   1% /etc/pve
tmpfs                                1.6G     0  1.6G   0% /run/user/0
 
Here is more from pvereport (trimmed of the VM guest configs, network, firewall, cluster, BIOS, and PCI info):

Code:
==== info about disks ====

# lsblk --ascii
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                       8:0    0 232.9G  0 disk
|-sda1                    8:1    0 232.9G  0 part
`-sda9                    8:9    0     8M  0 part
sdb                       8:16   0   3.7T  0 disk
|-sdb1                    8:17   0   3.7T  0 part
`-sdb9                    8:25   0     8M  0 part
sdc                       8:32   0 465.8G  0 disk
|-sdc1                    8:33   0   102M  0 part
`-sdc2                    8:34   0 465.7G  0 part
  |-VolGroup00-LogVol00 253:2    0 463.7G  0 lvm
  `-VolGroup00-LogVol01 253:3    0     2G  0 lvm
sdd                       8:48   0 465.8G  0 disk
|-sdd1                    8:49   0 460.2G  0 part
|-sdd2                    8:50   0     1K  0 part
`-sdd5                    8:53   0   5.6G  0 part
sde                       8:64   1  14.5G  0 disk
|-sde1                    8:65   1     1M  0 part
|-sde2                    8:66   1   256M  0 part /boot/efi
`-sde3                    8:67   1  14.3G  0 part
  |-pve-swap            253:0    0   1.8G  0 lvm  [SWAP]
  |-pve-root            253:1    0   3.5G  0 lvm  /
  |-pve-data_tmeta      253:4    0     8M  0 lvm
  | `-pve-data-tpool    253:6    0   7.3G  0 lvm
  |   `-pve-data        253:7    0   7.3G  0 lvm
  `-pve-data_tdata      253:5    0   7.3G  0 lvm
    `-pve-data-tpool    253:6    0   7.3G  0 lvm
      `-pve-data        253:7    0   7.3G  0 lvm
zd0                     230:0    0   128K  0 disk
zd16                    230:16   0    32G  0 disk
zd32                    230:32   0    40G  0 disk
|-zd32p1                230:33   0   500M  0 part
`-zd32p2                230:34   0  39.5G  0 part
zd48                    230:48   0   128K  0 disk
zd64                    230:64   0   128K  0 disk
zd80                    230:80   0    32G  0 disk
|-zd80p1                230:81   0   512M  0 part
|-zd80p2                230:82   0  27.5G  0 part
`-zd80p3                230:83   0     4G  0 part
zd96                    230:96   0   128K  0 disk
zd112                   230:112  0    32G  0 disk
|-zd112p1               230:113  0   512M  0 part
|-zd112p2               230:114  0  27.5G  0 part
`-zd112p3               230:115  0     4G  0 part
zd128                   230:128  0    32G  0 disk
|-zd128p1               230:129  0   450M  0 part
|-zd128p2               230:130  0    99M  0 part
|-zd128p3               230:131  0    16M  0 part
`-zd128p4               230:132  0  31.5G  0 part
zd144                   230:144  0   128K  0 disk
zd160                   230:160  0   128K  0 disk
zd176                   230:176  0    32G  0 disk
|-zd176p1               230:177  0   450M  0 part
|-zd176p2               230:178  0    99M  0 part
|-zd176p3               230:179  0    16M  0 part
`-zd176p4               230:180  0  31.5G  0 part
zd192                   230:192  0    32G  0 disk
|-zd192p1               230:193  0   512M  0 part
|-zd192p2               230:194  0  23.6G  0 part
`-zd192p3               230:195  0   7.9G  0 part
zd208                   230:208  0   128K  0 disk
zd224                   230:224  0   128K  0 disk
zd240                   230:240  0    32G  0 disk
|-zd240p1               230:241  0   512M  0 part
|-zd240p2               230:242  0  27.5G  0 part
`-zd240p3               230:243  0     4G  0 part
zd256                   230:256  0    32G  0 disk
zd272                   230:272  0   128K  0 disk
zd288                   230:288  0    32G  0 disk
|-zd288p1               230:289  0   512M  0 part
`-zd288p2               230:290  0  31.5G  0 part
zd304                   230:304  0   128K  0 disk
zd320                   230:320  0    32G  0 disk
|-zd320p1               230:321  0   512M  0 part
|-zd320p2               230:322  0  23.6G  0 part
`-zd320p3               230:323  0   7.9G  0 part
zd336                   230:336  0   128K  0 disk

==== info about volumes ====

# lvs
  LV       VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LogVol00 VolGroup00 -wi-a----- 463.72g                                                  
  LogVol01 VolGroup00 -wi-a-----   1.94g                                                  
  data     pve        twi-aotz--   7.28g             0.00   0.78                          
  root     pve        -wi-ao----   3.50g                                                  
  swap     pve        -wi-ao----   1.75g                                                  

# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup00   1   2   0 wz--n- 465.66g    0
  pve          1   3   0 wz--n-  14.27g 1.73g

# zpool status
  pool: data
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    data        ONLINE       0     0     0
      sda       ONLINE       0     0     0

errors: No known data errors

  pool: nas
 state: ONLINE
  scan: none requested
config:

    NAME                      STATE     READ WRITE CKSUM
    nas                       ONLINE       0     0     0
      wwn-0x5000c500a4ea8781  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
data                                  185G  39.5G  23.6G  /data
data/image                            109G  39.5G    19K  /data/image
data/image/subvol-100-disk-1         7.69G   319M  7.69G  /data/image/subvol-100-disk-1
data/image/vm-101-disk-1             12.8G  39.5G  12.8G  -
data/image/vm-101-disk-2             16.5K  39.5G  16.5K  -
data/image/vm-102-disk-1             16.2G  39.5G  16.2G  -
data/image/vm-102-disk-2             15.5K  39.5G  15.5K  -
data/image/vm-104-disk-1             15.5K  39.5G  15.5K  -
data/image/vm-105-disk-1                8K  39.5G     8K  -
data/image/vm-105-disk-2             10.5K  39.5G  10.5K  -
data/image/vm-106-disk-1             5.89G  39.5G  5.89G  -
data/image/vm-106-disk-2             16.5K  39.5G  16.5K  -
data/image/vm-110-disk-1             12.6G  39.5G  12.6G  -
data/image/vm-110-disk-2               19K  39.5G    19K  -
data/image/vm-111-disk-1             3.71G  39.5G  3.71G  -
data/image/vm-111-disk-2               16K  39.5G    16K  -
data/image/vm-112-disk-1             14.1G  39.5G  14.1G  -
data/image/vm-112-disk-2               20K  39.5G    20K  -
data/image/vm-113-disk-1             3.44G  39.5G  3.44G  -
data/image/vm-113-disk-2             20.5K  39.5G  20.5K  -
data/image/vm-113-disk-3             21.0G  39.5G  21.0G  -
data/image/vm-114-disk-1               12K  39.5G    12K  -
data/image/vm-114-disk-2               17K  39.5G    17K  -
data/image/vm-115-disk-1             11.3G  39.5G  11.3G  -
data/image/vm-115-disk-2             16.5K  39.5G  16.5K  -
data/iso                             7.01G  39.5G  7.01G  /data/iso
data/storage                         45.9G  39.5G    19K  /data/storage
data/storage/subvol-100-disk-1       45.9G  39.5G  45.9G  /data/storage/subvol-100-disk-1
nas                                   431G  3.09T    96K  /nas
nas/netwk-storage                     431G  3.09T    96K  /nas/netwk-storage
nas/netwk-storage/subvol-100-disk-1   431G  3.09T   431G  /nas/netwk-storage/subvol-100-disk-1
 
1. How do I increase the size of the pve-root partition from 3.8 GB to a higher value? I have 14 GB elsewhere on the USB stick unused.

On the 14 GB media (probably the stick) you have the volume group that contains root, and it still has 1.73 GB free. You can add that space to pve-root with lvextend.
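A minimal sketch of those steps, assuming the root LV is /dev/pve/root on ext4 as shown in the report above (check lvs and vgs first so you extend the right volume):

Code:
# add all remaining free space in the pve VG to the root logical volume
lvextend -l +100%FREE /dev/pve/root
# grow the ext4 filesystem online to fill the enlarged LV
resize2fs /dev/mapper/pve-root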


2. Why is my PVE directory for ISO storage now referencing space on the pve-root partition and not my data/iso directory on the SSD? And how do I change its reference to point back at the SSD space? Do I just delete and re-create it?

Looks strange indeed (the pvereport is not complete, so I cannot see everything; uploading the result as a file would be more informative). I guess "/data", which is dedicated as a mountpoint, also contains some files in the root filesystem, which can happen when you use a non-empty directory as a mountpoint.

Since the OS is on a stick: read the stick on another computer and check what /data (in the root filesystem) contains. If there is something unwanted, delete it from there and then boot Proxmox again.
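If pulling the stick is inconvenient, one generic way to peek underneath a mountpoint is a bind mount of the root filesystem, which shows the directory contents hidden by the mounted ZFS datasets (just a sketch, not an official procedure):

Code:
# expose the plain root filesystem without any of its submounts
mkdir -p /mnt/rootfs
mount --bind / /mnt/rootfs
# anything listed here lives on pve-root, not on the SSD pool
ls -la /mnt/rootfs/data /mnt/rootfs/data/iso
umount /mnt/rootfs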
 
Hello again,

I've been able to keep going until now, but I now get "No space left on device" when I try to perform any actions. I tried the lvextend command, but that didn't manage to create the extra space. Next, I tried to build a new installation on another USB stick, this time using a 64 GB stick, and tried to re-create the zpools. But re-creating a zpool makes the data stored there disappear. I need instructions for how to bring that data back into a fresh Proxmox installation.
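For reference, and assuming the pool on the SSD has not already been overwritten: an existing ZFS pool is normally attached to a fresh installation with zpool import rather than by re-creating it (zpool create writes new labels and makes the old data inaccessible). A minimal sketch, using the pool name data from the report above:

Code:
# scan attached disks for pools that can be imported
zpool import
# import the existing pool by name; -f may be needed if it was last used by the old install
zpool import -f data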
 
