You can migrate everything over before adding the 2 disks from the old server, then add the 2 disks to the new server as hinted previously.
Google for the other question.
If you installed using the 2 drives in the old mirror, then no grub install is necessary. No balancing will happen; the array will be unbalanced at first, which evens out over the long term as new data is written.
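A minimal sketch of adding the two old disks as a second mirror vdev, assuming the pool is called rpool and the old disks show up as /dev/sdc and /dev/sdd (both names are placeholders, check yours first):

# verify the current layout
zpool status rpool
# add the two old disks as a second mirror vdev; new writes stripe across both vdevs
zpool add rpool mirror /dev/sdc /dev/sdd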
I'm in a similar situation with a few Dell servers with the H710, where JBOD or direct-attached disks are unavailable. The closest thing to that is a single-drive RAID0 logical disk on the controller for each individual disk. Do not create arrays with multiple disks; let ZFS do all the RAID functions...
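If you take that route, creating the per-disk RAID0 logical disks with MegaCli might look roughly like this; the 252:0 enclosure:slot ID is hypothetical, take the real IDs from the PDList output:

# list physical disks to get their enclosure:slot IDs
megacli -PDList -aALL
# create a single-drive RAID0 logical disk from the disk in enclosure 252, slot 0
megacli -CfgLdAdd -r0 [252:0] -a0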
IIRC I had to install systemd-sysv, or install sysvinit and ditch systemd entirely, in one or two similar cases with mariadb. There is a known compatibility problem regarding LXC and systemd in Jessie. I didn't dig deeper back then because I lacked the time. In one case I installed systemd-sysv...
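For reference, the two package swaps inside a Jessie container would be roughly:

# option 1: systemd stays, systemd-sysv provides /sbin/init and the classic sysv commands
apt-get install systemd-sysv
# option 2: replace systemd as init with classic sysvinit
apt-get install sysvinit-core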
I had a similar error when I had my cache and log devices on LVM volumes. I changed them to use plain partitions and the pool was imported at boot. It could have been fixed in other ways, but this solution was sufficient.
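Moving the devices off LVM looks roughly like this, assuming a pool named tank; all device paths here are placeholders:

# remove the LVM-backed cache and log devices from the pool
zpool remove tank /dev/vg0/zfscache
zpool remove tank /dev/vg0/zfslog
# re-add them as plain partitions
zpool add tank cache /dev/sdb1
zpool add tank log /dev/sdb2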
I recommend doing the same, though your situation is different as your...
OK, put the missing line in the file and you're good to go. Make sure the file ends with a newline (an empty standalone line at the end is not strictly necessary, but it's an easy way to make sure).
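For reference, the fstab entry for such a volume typically looks like this; the device path is a guess based on PVE defaults, adjust it to your actual VG/LV names:

/dev/pve/data  /var/lib/vz  ext4  defaults  0  2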
Weird. The big data volume has the defaults flag, which includes auto, meaning it gets mounted automatically at startup or when mount -a is issued. Yet it didn't mount. Try simply "mount /var/lib/vz" and see what happens.
From your post it looks to me like you haven't mounted the freshly created ext4 volume. I see you editing fstab, but please show its contents and also the contents of /proc/mounts. The size and usage of the VM storage are consistent with your root volume, further hinting that the big volume is not...
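To collect that information in one go:

cat /etc/fstab
cat /proc/mounts
df -h /var/lib/vz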
You should convert your lvm-thin storage to directory-based storage; then you could simply restore the old dumps as-is. To do that, remove the thin pool, create a suitably sized LVM volume in its place, format it as ext4, add it as directory storage in the PVE web admin, and done. LVM-thin is more...
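A rough CLI sketch, assuming the default pve volume group and data thin pool; names and the mount point are placeholders, and lvremove destroys everything on the pool, so back up first:

# remove the thin pool
lvremove pve/data
# create a regular LV in its place and format it as ext4
lvcreate -n data -l 100%FREE pve
mkfs.ext4 /dev/pve/data
# mount it and register it as directory storage (the web admin works too)
mkdir -p /mnt/vmdata
mount /dev/pve/data /mnt/vmdata
pvesm add dir vmdata --path /mnt/vmdata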
Quick update. It doesn't work on the upstream Ubuntu kernel either. Used kernel version:
Linux upstest1 4.4.0-77-generic #98-Ubuntu SMP Wed Apr 26 08:34:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
It is most probably not related to PVE itself, only to the kernel it uses. Will try to submit a bug report...
I have actually just finished such a test. I've set up a PVE 3.4 instance in a KVM VM, passed the appropriate USB device to it and - it works.
root@pve3test:~# apcaccess
APC : 001,028,0707
DATE : 2017-05-04 14:33:41 +0200
HOSTNAME : pve3test
VERSION : 3.14.10 (13 September 2011) debian...
Thanks for answering. I see you have an APC model; I have MGE/Eaton Ellipse models that have been working perfectly fine on PVE 3.4. The problem might only come up when MGE (or specific models) and PVE 4.4 are used together.
I know it's not exactly an apcupsd UPS support site, but I hope that someone else might have seen the same issues as I have, as this possibly relates to the PVE kernel.
I've upgraded 2 systems where previously we ran PVE 3.4 latest. These systems used apcupsd for smaller MGE/Eaton Ellipse UPS boxes...