Your VM configuration lives at /etc/pve/nodes/<your_node_name>/qemu-server/<VMID>.conf
or
qm config <VMID>
If you back up your machine with Proxmox backup, the backup includes the machine configuration.
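For example, a quick sketch of saving standalone copies of every guest config before a reinstall (the /mnt/usbbackup destination is just an assumption):
$ mkdir -p /mnt/usbbackup/configs
$ cp /etc/pve/nodes/$(hostname)/qemu-server/*.conf /mnt/usbbackup/configs/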
Migrating Proxmox from a 1 TB to a 2 TB disk is possible and simple if you have ZFS on the system disk.
Migrate from Proxmox 6 to 7 first, then from 7 to 8.
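Each hop looks roughly like this (a sketch of the 7-to-8 step only; pve7to8 is the readiness checker shipped with PVE 7, and the official upgrade guide has the full procedure):
$ pve7to8 --full
$ sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
$ apt update && apt dist-upgrade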
Thanks Milew. My ZFS storage was created from several HDDs, separate from the system disk, which is an M.2 SSD. I'm assuming I can mount an external drive, add a Backup Job under the Proxmox Datacenter using "mode: stop", and run the job to back up all my VMs. I'm not sure how to back up the ZFS pool configuration so that, once I create a ZFS disk on the newly installed Proxmox 8, I can use the existing pool.
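From what I've read, something like the following might be all that's needed; this is only a sketch, with /mnt/usbbackup standing in for wherever the external drive gets mounted, and it assumes I'm right that the pool's configuration is stored in the ZFS labels on the member disks themselves:
$ vzdump --all --mode stop --compress zstd --dumpdir /mnt/usbbackup
$ zpool export plexmedia    # before wiping the system disk
$ zpool import plexmedia    # after the fresh Proxmox 8 install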
Here is some additional information on my drive setup. Drives sda through sdf were used to create the ZFS pool. One of my VMs is on a separate drive (sdg1, mounted at /mnt/plex). The rest of the VMs are all on nvme0n1. The sdh drive is unused.
$ lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0   9.1T  0 disk
├─sda1                           8:1    0   9.1T  0 part
└─sda9                           8:9    0     8M  0 part
sdb                             8:16    0   9.1T  0 disk
├─sdb1                          8:17    0   9.1T  0 part
└─sdb9                          8:25    0     8M  0 part
sdc                             8:32    0   9.1T  0 disk
├─sdc1                          8:33    0   9.1T  0 part
└─sdc9                          8:41    0     8M  0 part
sdd                             8:48    0   9.1T  0 disk
├─sdd1                          8:49    0   9.1T  0 part
└─sdd9                          8:57    0     8M  0 part
sde                             8:64    0   9.1T  0 disk
├─sde1                          8:65    0   9.1T  0 part
└─sde9                          8:73    0     8M  0 part
sdf                             8:80    0   9.1T  0 disk
├─sdf1                          8:81    0   9.1T  0 part
└─sdf9                          8:89    0     8M  0 part
sdg                             8:96    0   2.7T  0 disk
└─sdg1                          8:97    0   2.7T  0 part /mnt/plex
sdh                            8:112    0   2.7T  0 disk
nvme0n1                        259:0    0 931.5G  0 disk
├─nvme0n1p1                    259:1    0  1007K  0 part
├─nvme0n1p2                    259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                    259:3    0   931G  0 part
  ├─pve-swap                   253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                   253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta             253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool           253:4    0 794.8G  0 lvm
  │   ├─pve-data               253:5    0 794.8G  0 lvm
  │   ├─pve-vm--105--disk--0   253:6    0   100G  0 lvm
  │   ├─pve-vm--104--disk--0   253:7    0   256G  0 lvm
  │   ├─pve-vm--108--disk--0   253:8    0    32G  0 lvm
  │   ├─pve-vm--100--disk--0   253:9    0    20G  0 lvm
  │   ├─pve-vm--101--disk--0  253:10    0    10G  0 lvm
  │   ├─pve-vm--110--disk--0  253:11    0   400G  0 lvm
  │   └─pve-vm--102--disk--0  253:12    0    50G  0 lvm
  └─pve-data_tdata             253:3    0 794.8G  0 lvm
    └─pve-data-tpool           253:4    0 794.8G  0 lvm
      ├─pve-data               253:5    0 794.8G  0 lvm
      ├─pve-vm--105--disk--0   253:6    0   100G  0 lvm
      ├─pve-vm--104--disk--0   253:7    0   256G  0 lvm
      ├─pve-vm--108--disk--0   253:8    0    32G  0 lvm
      ├─pve-vm--100--disk--0   253:9    0    20G  0 lvm
      ├─pve-vm--101--disk--0  253:10    0    10G  0 lvm
      ├─pve-vm--110--disk--0  253:11    0   400G  0 lvm
      └─pve-vm--102--disk--0  253:12    0    50G  0 lvm
$ sudo zpool status -v
  pool: plexmedia
 state: ONLINE
  scan: scrub repaired 0B in 12:01:17 with 0 errors on Sun Dec 10 12:25:19 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        plexmedia                                 ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_JEK182AZ    ONLINE       0     0     0
            ata-WDC_WD101KFBX-68R56N0_2YJH17RD    ONLINE       0     0     0
            ata-WDC_WD101KFBX-68R56N0_2YJH9LJD    ONLINE       0     0     0
            ata-WDC_WD101KFBX-68R56N0_2YJHAW9D    ONLINE       0     0     0
            ata-WDC_WD101KFBX-68R56N0_JEHLW5MN    ONLINE       0     0     0
            ata-WDC_WD101KFBX-68R56N0_JEHLXA6N    ONLINE       0     0     0

errors: No known data errors
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G   26M  3.2G   1% /run
/dev/mapper/pve-root   94G   69G   21G  77% /
tmpfs                  16G   43M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/sdg1             2.7T  168G  2.4T   7% /mnt/plex
plexmedia              37T   23T   14T  62% /mnt/plexmedia
/dev/fuse              30M   24K   30M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/1000