Hello,
We are using 4.2 and are trying to upgrade our container from Ubuntu 14 to 16.
We have had no success at all, since the container keeps saying:
root@API-Server:/# do-release-upgrade -d
Checking for a new Ubuntu release
No new release found
Of course we have been doing the usual...
Ok - I found the solution by myself:
I created a new container and compared the syntax of the config files in /etc/pve/lxc/
And I found out that my rootfs was defined as:
rootfs: ZFS:subvol-100-disk-1,size=10G
Whereas it should instead look like:
rootfs: volume=local-zfs:subvol-100-disk-1,size=10G...
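For anyone hitting the same thing, the quickest way to check is to look at the container config directly (VMID 100 is just my container, adapt as needed):
root@API-Server:/# pct config 100 | grep rootfs
root@API-Server:/# nano /etc/pve/lxc/100.conf
Fix the rootfs line as shown above and the release upgrade check should behave normally again.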
Hello,
I was experimenting a bit with pve-zsync
All of our backups are done on a FreeNAS server, and I was able to very easily back up the container to it.
I am now trying to restore the snapshot from my FreeNAS server.
Restoring went very well and I now have a clean pool with...
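For the record, the sync itself is just a one-liner along these lines (freenas.local and tank/pve-backups are placeholders for my own host and dataset):
# pve-zsync sync --source 100 --dest freenas.local:tank/pve-backups --verbose --maxsnap 7
Restoring is then basically a zfs send of the replicated snapshot on the FreeNAS side piped into zfs recv on the Proxmox side.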
[SOLVED]
Ok - after a series of hassles:
BIOS update
UEFI update
SAS controller update
I have found out what the problem really was all about.
You need to tune your SAS-2308 controller so that it is set to the maximum possible number of devices to scan.
The parameter is ...
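Once the controller option is changed, a quick sanity check is to confirm that the OS now sees all the disks (generic commands, output will obviously depend on your hardware):
# lsblk -o NAME,SIZE,MODEL
# ls -l /dev/disk/by-id/ | grep -v part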
Thanks for this answer.
So the root cause of the main scenario (RAID-Z2 or RAID 10) is that not all disks are correctly detected by GRUB after the install?
And you suspect a BIOS setting to be at fault.
Strange thing is that disks are being correctly displayed when I boot on the...
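For what it is worth, one way to see which drives GRUB itself detects is to drop to the GRUB command line and list them (the device names below are only an example):
grub> ls
(hd0) (hd0,gpt1) (hd0,gpt2) (hd1) (hd2)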
I have re-installed using a simple ZFS RAID-0 on one disk only (same disk as the one used for ext4 boot) and I had yet another error…
With a "cannot import 'rpool'"
We have re-installed with a basic install using ext4 on the first disk detected by our HBA and (besides a slower install time) everything went as expected!
Server has booted.
So I can confirm that there is a bug in the installer when it comes to ZFS RAID 10 or ZFS RAID-Z2.
Not really...
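For completeness, when the boot drops to the initramfs prompt with that error, it is worth checking from there whether the pool and its devices are visible at all (generic commands, nothing specific to my setup):
# zpool import            (lists the pools that can be imported, if any)
# zpool import -N rpool   (imports rpool without mounting it)
# exit                    (continues the boot)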
Hello folks,
We are fighting to install PVE 4.2 on a brand new Supermicro with an HBA and 5 SAS disks installed on it.
Configuration is quite straightforward and all disks are seen at install time.
First goal was to configure PVE on 4 disks in ZFS RAID 10: I selected the four target disks and...
[SOLVED]
Ok, after reading and analyzing the situation, I concluded that I might have skipped a reboot after a kernel upgrade.
So I did reboot, and that solved my issue!
Thanks for your help!
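In case it helps someone else, comparing the running kernel with the installed pve-kernel packages makes it obvious when a reboot is still pending (output will of course differ on your box):
# uname -r
# pveversion -v | grep kernel
If the running kernel is older than the newest installed pve-kernel package, modules and tools may refuse to load until you reboot.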
Nope…
It looks like the command should rather be:
# modprobe nfs4
The other command returned an error:
root@proxmini:/ # modprobe nfsv4
modprobe: ERROR: could not insert 'nfsv4': Invalid argument
If I do an lsmod to find out more about NFS, I get these results:
root@proxmini:/...
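A couple of generic checks that help narrow down why a module will not insert (nothing specific to my box):
root@proxmini:/# lsmod | grep nfs
root@proxmini:/# find /lib/modules/$(uname -r) -name 'nfs*.ko'
root@proxmini:/# uname -r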
Hello Dietmar,
Thanks for your answer.
Of course the protocol is supported on our server (and as I wrote, it is working on our OpenVZ hosts).
Unfortunately it is not working with the line you sent me:
root@proxmini:/home/gregober# mount -t nfs -o...
Hello,
I am planning to migrate my old Proxmox 3.4 install to 4.1 in the coming days.
But unfortunately, we have some NFS shares mounted with v4 under 3.4.
The test server that we have in 4.1 does not seem very happy when we try to mount or force mount NFS v4…
When mounting manually we have a...
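For context, a v4 mount through the storage layer would be declared in /etc/pve/storage.cfg roughly like this (storage name, server and paths below are placeholders, not our real config):
nfs: backups-nfs
        server 192.168.1.10
        export /mnt/tank/backups
        path /mnt/pve/backups-nfs
        content backup
        options vers=4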