So I have an old server and a new server, both running Proxmox. I was attempting to migrate about 600GB of data. It didn't need to be migrated quickly, but it has taken nearly two weeks. I ran the command inside of screen. One of the servers rebooted due to a power outage, and it literally erased...
I am using a package called s3fs, which allows you to mount S3 object storage on a Linux machine as a disk.
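For reference, a minimal s3fs invocation looks something like the sketch below. The bucket name and mount point are hypothetical, and it assumes a `~/.passwd-s3fs` credentials file already exists (format `ACCESS_KEY_ID:SECRET_ACCESS_KEY`, mode 600):

```shell
# Sketch only: bucket name and mount point are placeholders, not the
# real values from my setup.
BUCKET=my-bucket               # hypothetical bucket name
MOUNTPOINT=/mnt/s3             # hypothetical mount point
CMD="s3fs $BUCKET $MOUNTPOINT -o passwd_file=~/.passwd-s3fs"
echo "$CMD"                    # run this as root once /dev/fuse is available
```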
When trying to mount it, I get the error:
fuse: device not found, try 'modprobe fuse' first
After referencing this thread, I was able to figure out that I need to run the following...
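What that fix amounts to (my read of it, based on the error text itself) is checking for the fuse kernel module and loading it if it's missing:

```shell
# Assumption based on the "try 'modprobe fuse' first" error: the fuse
# kernel module simply isn't loaded yet.
fuse_status=missing
if lsmod 2>/dev/null | grep -q '^fuse'; then
    fuse_status=loaded
fi
echo "fuse module: $fuse_status"
if [ "$fuse_status" = missing ]; then
    # Needs root; to persist across reboots on Debian-based Proxmox,
    # you can also add "fuse" to /etc/modules.
    modprobe fuse 2>/dev/null || echo "run 'modprobe fuse' as root"
fi
```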
Very possible. I would have to wait until next weekend to try it, since it takes about 15 hours to transfer. My only hesitation in pointing to that as the issue is that I did an identical migration from and to the same Proxmox servers, with the exact same settings, and had no issues.
I have attempted a direct dd-over-ssh migration of a particular VM, which is on an older Proxmox 5.4-5 server, to a new server running 6.1-7, three different times now, and I end up having the same issue every time. This is the command that I used to do the migration (it worked fine for my other VMs)...
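A typical dd-over-ssh pipeline for this kind of migration looks like the sketch below. To be clear, the device paths, VM ID, and hostname here are hypothetical placeholders, not the exact command from the post:

```shell
# Sketch of a dd-over-ssh disk copy between Proxmox nodes; all names
# below are hypothetical.
SRC=/dev/pve/vm-101-disk-0        # hypothetical source volume on the old node
DST=/dev/pve/vm-101-disk-0        # hypothetical target volume on the new node
TARGET=root@new-proxmox           # hypothetical destination host
CMD="dd if=$SRC bs=1M status=progress | ssh $TARGET 'dd of=$DST bs=1M'"
echo "$CMD"   # shown rather than executed; run on the old node as root
```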
I just encountered the same issue today, having written to the USB drive exactly as described in the manual, using dd. This was Proxmox 5.1. This is a very simple issue that needs to be fixed... for some reason this is hard-coded as a cdrom in the syslinux config, and it only works when you...
I am sorry if I shouldn't be posting this here, but I have this exact same issue on Proxmox 4, except it is a KVM VM rather than an LXC container, running CentOS 7.
Is it normal behavior for ZFS in Proxmox 4 to only allow 6 drives? My server has 8 drives, but I am only allowed to add 6 of them to any sort of Proxmox RAID configuration.
I am using the command "pct restore"; what option do I have to append so that I can specify the storage location? Essentially, it creates the .raw disk image file on local storage but fails to create the disk. I also cannot upload a backup to local storage via the Web GUI; it gives the...
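For anyone hitting the same wall: `pct restore` does take a `--storage` option to pick where the container's root disk is created. The VM ID, archive path, and storage ID in this sketch are hypothetical:

```shell
# Sketch of restoring a container backup to a specific storage; the
# VMID, archive path, and storage ID are placeholders.
VMID=105
ARCHIVE=/var/lib/vz/dump/vzdump-lxc-105.tar.gz   # hypothetical backup archive
STORAGE=local-zfs                                # hypothetical storage ID
CMD="pct restore $VMID $ARCHIVE --storage $STORAGE"
echo "$CMD"   # shown rather than executed (needs a real Proxmox node)
```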
How exactly were you able to get the OpenVZ container to run on the rpool? I created an rpool, but I cannot get the actual container to run on it.
I just fired up a new Proxmox PVE 4.0 server with ZFS and followed the Proxmox instructions on how to restore my OpenVZ containers to it. I scp'ed the file over to /var/lib/vz/dump and then attempted to restore it using the pct restore command, but I get an error:
Allocating group tables: 0/64...