I need some help replacing a couple of smaller drives with a couple of bigger ones, while keeping the same ZFS filesystem but enlarging the pool, and also switching to EFI boot.
This is on PVE 6.4 in preparation for upgrading to 7.
This is what /dev/sda and /dev/sdb look like right...
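For context, here's the rough sequence I have in mind, pieced together from the docs. The device names are placeholders for my actual disks, and the partition numbers assume the default PVE 6.x layout (1: BIOS boot, 2: ESP, 3: ZFS), so please correct me if any of this is off:

# Replicate the partition table onto the new, larger disk, give it
# fresh GUIDs, then recreate partition 3 so it fills the extra space
sgdisk /dev/sda -R /dev/sdc
sgdisk -G /dev/sdc
sgdisk -d 3 -n 3:0:0 -t 3:bf01 /dev/sdc

# Swap the old ZFS partition for the new one and wait for the resilver
zpool replace rpool /dev/sda3 /dev/sdc3
zpool status rpool

# Once both drives are swapped, let the pool grow into the new space
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdc3

# Prepare the new ESPs for EFI boot (PVE 6.4 ships proxmox-boot-tool)
proxmox-boot-tool format /dev/sdc2
proxmox-boot-tool init /dev/sdc2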
I've installed the latest Proxmox VE on a new machine and the web interface keeps kicking me out. It can't be the inactivity timeout, because it logs me out even when I click something different every 3 seconds.
I've looked at past reports of this problem, and most of the time the cause has been clock/time...
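For anyone hitting the same thing, a quick sanity check of the clock theory from the shell (as far as I understand, a big skew between the server clock and the browser's clock can invalidate the login ticket almost immediately):

timedatectl status      # is NTP sync active, and does the time look right?
chronyc tracking        # current offset, if chrony is the time daemon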
So here's the setup in a nutshell:
two nodes: node1, node2 in a cluster
cluster is healthy and running the latest version of PVE, all updates applied
node1 storages: local, local-zfs, pool1
node2 storages: local, local-zfs, pool2
The question is: how do I move a container that runs on node1...
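In case it helps frame the question: the fallback I know would work is a plain backup/restore cycle, sketched below. The VMID and the backup storage name are placeholders; pool1/pool2 are the storages listed above.

# On node1: dump the container to a storage node2 can also reach
vzdump 101 --mode stop --compress zstd --storage backups

# On node2: restore it into pool2 (same VMID, or pick a new one)
pct restore 101 /mnt/pve/backups/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage pool2

What I'm hoping for is something more direct than that.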
Is it possible to disconnect a ZFS-based storage (with containers on it) from a Proxmox machine, then later reconnect it and restore the containers from it?
I need to add the machine in question to a cluster and don't have the space to back up those containers. So I was wondering if I could...
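Roughly what I'm picturing, assuming the container disks live on that pool (the pool name is a placeholder). The configs are the part I'm unsure about, since they live in the cluster filesystem rather than on the pool, and joining a cluster replaces the node's /etc/pve:

# Save the container configs somewhere safe first
mkdir -p /root/lxc-configs
cp /etc/pve/lxc/*.conf /root/lxc-configs/

# Cleanly detach the pool before the cluster join
zpool export tank

# ...and after joining, bring it back
zpool import tank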
I'm thinking of using a CPU without an iGPU, installing with a discrete GPU, then removing it and running the machine without any display adapter at all.
I realize that for any troubleshooting or BIOS upgrades I would need to add a GPU again, but the question is: does the machine need a GPU just to run?
Anyone tried this?
I've done a manual backup of a KVM VM (Windows) to NFS.
I chose the 'Stop' mode and Gzip compression.
Here's the tail of the log:
INFO: status: 100% (161061273600/161061273600), sparse 78% (127045570560), duration 2178, read/write 456/0 MB/s
INFO: transferred 161061 MB in 2178 seconds (73 MB/s)...
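For reference, the CLI equivalent of what I ran (the VMID and storage name are placeholders for mine):

vzdump 100 --mode stop --compress gzip --storage nfs-backup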
I just noticed on my Proxmox 5.0 machine that one of the hard drives is listed as type "Unknown", and I was wondering what the cause is.
The drive is accessible and mounted, and its SMART stats look OK.
Do you think there is anything to worry about?
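In case it's relevant, here's what I already checked from the shell; as far as I understand, the GUI derives the disk type from what smartctl/udev report about the device (the device name matches the drive in question):

smartctl -i /dev/sdb                       # SMART identity info
lsblk -d -o NAME,ROTA,TRAN,MODEL /dev/sdb  # rotational flag and transport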
I had only one hard drive (/dev/sda) when I installed Proxmox 5.0 and I chose ZFS for storage.
Now I've bought and added another identical drive (/dev/sdb) and want to turn rpool into a mirror and make both drives bootable so the machine can survive a drive failure.
Here's how /dev/sda looks...
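The procedure I've pieced together so far is below; the partition numbers assume the default PVE 5 layout where the ZFS partition is number 2, so please correct me if they're off:

# Clone sda's partition table onto sdb and randomize its GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Attach sdb's ZFS partition to turn the single disk into a mirror
zpool attach rpool /dev/sda2 /dev/sdb2
zpool status rpool    # wait for the resilver to finish

# Make the second disk bootable too (PVE 5 boots ZFS via GRUB)
grub-install /dev/sdb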