So I have an old server and a new server, both running Proxmox. I was attempting to migrate about 600 GB of data. It didn't need to be migrated quickly, but it has taken nearly two weeks. I ran the command inside screen. One of the servers rebooted due to a power outage, and it literally erased...
I am using a package called s3fs, which allows you to mount S3 object storage on a Linux machine as a disk.
When trying to mount it, I get the error
fuse: device not found, try 'modprobe fuse' first
After referencing this thread, I was able to figure out that I need to run the following...
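The fix referenced above typically amounts to loading the FUSE kernel module and then retrying the mount. A sketch, assuming a bucket named `mybucket`, a mountpoint of `/mnt/s3`, and credentials stored in `~/.passwd-s3fs` — all placeholders, not values from the post:

```shell
# Load the FUSE kernel module that s3fs depends on (requires root).
modprobe fuse

# Mount the bucket; s3fs reads "ACCESS_KEY:SECRET_KEY" from the
# passwd file given via -o passwd_file.
s3fs mybucket /mnt/s3 -o passwd_file=~/.passwd-s3fs

# Confirm the mount took effect.
mount | grep s3fs
```

On most distributions the fuse module loads automatically at boot once the fuse package is installed, so the manual modprobe is usually only needed in minimal or container environments.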
I have attempted a direct dd-over-ssh migration of a particular VM, which is on an older Proxmox 5.4-5 server, to a new server running 6.1-7 three different times now, and I end up with the same issue every time. This is the command that I used to do the migration (it worked fine for my other VMs)...
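The poster's actual command is elided above; for reference, the dd-over-ssh pattern usually looks something like `dd if=/dev/zvol/rpool/vm-100-disk-0 bs=1M | ssh root@newserver 'dd of=/dev/zvol/rpool/vm-100-disk-0 bs=1M'` (device paths and host are assumptions). A local sketch of the same pipeline, with the ssh hop removed so it can run anywhere:

```shell
# Make a small stand-in "disk image" (2 MiB of random data).
dd if=/dev/urandom of=/tmp/vm_src.raw bs=1M count=2 2>/dev/null

# Stream it through a pipe, as ssh would in a real migration;
# gzip/gunzip stand in for compression sometimes added on slow links.
dd if=/tmp/vm_src.raw bs=1M 2>/dev/null | gzip -c | gzip -d \
  | dd of=/tmp/vm_dst.raw bs=1M 2>/dev/null

# Verify the copy is bit-identical before trusting the migration.
cmp /tmp/vm_src.raw /tmp/vm_dst.raw && echo "images match"
```

Running `cmp` (or comparing checksums) on both ends is a cheap way to rule out silent corruption when a dd migration misbehaves repeatedly.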
Is it normal behavior for ZFS in Proxmox 4 to only allow 6 drives? My server has 8 drives, but I am only allowed to add 6 of them to any sort of Proxmox RAID configuration.
I just fired up a new Proxmox PVE 4.0 server with ZFS and followed the Proxmox instructions on how to restore my OpenVZ containers to it. I scp'ed the file over to /var/lib/vz/dump and then attempted to restore it using the pct restore command, but I get an error:
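For context, restoring a dump with pct usually looks something like the following; the container ID, archive filename, and storage name here are illustrative assumptions, not values from the post:

```shell
# Restore the uploaded dump as container 101 onto the named storage.
# --storage selects where the container's rootfs is created.
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101.tar.gz --storage local-zfs
```

On ZFS-backed installs the target storage matters, since pct needs a storage type that supports container rootfs images.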
Allocating group tables: 0/64...