Search results

  1. zfs raid mirror 1 to raid 10

    You can migrate everything over before adding the 2 disks from the old server, then add the 2 disks to the new server as hinted previously. Google for the other question.
  2. zfs raid mirror 1 to raid 10

    If you installed using the 2 drives in the old mirror, then no grub install is necessary. No rebalancing will happen; the array will be unbalanced at first, which evens out over the long term as new data is written.
  3. zfs raid mirror 1 to raid 10

    It's apparent what you have, since you posted your zpool status output. If you do what I suggested, you'll end up with a stripe of mirrors (~raid10).
  4. zfs raid mirror 1 to raid 10

        zpool add rpool mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2

    Should do it.
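
    For illustration, a sketch of what "zpool status rpool" might show after the add (disk names are placeholders, not taken from the thread):

        zpool status rpool
        # expected layout (abridged):
        #   rpool         ONLINE
        #     mirror-0    ONLINE   <- the original mirror
        #       disk1     ONLINE
        #       disk2     ONLINE
        #     mirror-1    ONLINE   <- the newly added mirror
        #       disk3     ONLINE
        #       disk4     ONLINE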
  5. Proxmox VE installation screen resolution problems

    Doesn't OP have a problem with the screen resolution of the installer? How would using another imaging tool solve that?
  6. ZFS poor performance

    I'm in a similar situation with a few Dell servers with H710 controllers, where JBOD or direct-attached disks are unavailable. The closest to that is a single-drive RAID0 logical disk on the controller for each individual disk. Do not use arrays with multiple disks; let ZFS do all the RAID functions...
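
    As a hedged sketch of that layout (the pool name and device paths are hypothetical), each physical disk becomes its own single-drive RAID0 logical disk on the H710, and the pool is built directly on those:

        # each device below is a single-drive RAID0 logical disk
        # exported by the H710, one per physical disk
        zpool create tank \
          mirror /dev/disk/by-id/wwn-disk0 /dev/disk/by-id/wwn-disk1 \
          mirror /dev/disk/by-id/wwn-disk2 /dev/disk/by-id/wwn-disk3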
  7. MariaDB problem after Proxmox upgrade

    IIRC I had to install systemd-sysv, or install sysvinit and ditch systemd entirely, in one or 2 similar cases with MariaDB. There is a known compatibility problem regarding LXC and systemd in Jessie. I didn't dig deeper then because I lacked the time to do that. In one case I installed systemd-sysv...
  8. Need some help to recover degraded ZFS raid

    I had a similar error when I had my cache and log devices on LVM volumes. I changed them to use simple partitions and the pool was imported at boot. It could have been fixed in other ways, but that solution was sufficient. I recommend doing the same, though your situation is different as your...
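
    A minimal sketch of that fix, assuming hypothetical device names (the LVM-backed log and cache devices are removed from the pool, then re-added as plain partitions):

        # remove the LVM-backed log and cache devices from the pool
        zpool remove rpool /dev/vg0/zfs-log
        zpool remove rpool /dev/vg0/zfs-cache
        # re-add them as simple partitions
        zpool add rpool log /dev/sdb1
        zpool add rpool cache /dev/sdb2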
  9. Assistance with migrating from Proxmox 1.9 to 4.4

    I think you will be able to put a line in a text file without further help ;)
  10. Assistance with migrating from Proxmox 1.9 to 4.4

    You can read too, can't you? The following line is not present in the fstab:

        /dev/pve/data /var/lib/vz ext4 defaults 0 1
  11. Assistance with migrating from Proxmox 1.9 to 4.4

    OK, put the missing line in the file and you're good to go. Make sure the file ends with a newline (an empty standalone line at the end is not strictly necessary, but it makes it easier to be sure).
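
    A minimal sketch of the fix, using the exact line quoted earlier in this thread:

        # append the missing entry to /etc/fstab, then mount it
        echo '/dev/pve/data /var/lib/vz ext4 defaults 0 1' >> /etc/fstab
        mount /var/lib/vz
        df -h /var/lib/vz   # confirm the volume is mounted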
  12. Assistance with migrating from Proxmox 1.9 to 4.4

    Here's your problem. Please post the contents of the WHOLE /etc/fstab file: run "cat /etc/fstab" and copy-paste the output.
  13. Assistance with migrating from Proxmox 1.9 to 4.4

    Weird. The big data volume has the defaults option, which includes auto, meaning it is mounted automatically at startup or when "mount -a" is issued. Yet it wasn't. However, try simply issuing "mount /var/lib/vz" and see what happens.
  14. Assistance with migrating from Proxmox 1.9 to 4.4

    From your post it looks to me like you haven't mounted the freshly created ext4 volume. I see you editing fstab, but please show its contents and also the contents of /proc/mounts. The size and usage of the VM storage are consistent with your root volume, further hinting that the big volume is not...
  15. Assistance with migrating from Proxmox 1.9 to 4.4

    You should convert your lvm-thin to directory-based storage; then you could simply restore the old dumps as-is. For that you need to remove the thin pool, create a suitably sized LVM volume in its place, format it as ext4, add it as directory storage in the PVE web admin, and you're done. LVM-thin is more...
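
    A hedged sketch of those steps (volume and storage names are assumptions; removing the thin pool destroys its contents, so make sure your dumps are safe first):

        # remove the thin pool (destroys all data on it!)
        lvremove pve/data
        # create a plain LVM volume in its place and format it as ext4
        lvcreate -n data -l 100%FREE pve
        mkfs.ext4 /dev/pve/data
        # mount it where PVE expects directory storage
        echo '/dev/pve/data /var/lib/vz ext4 defaults 0 1' >> /etc/fstab
        mount /var/lib/vz
        # CLI equivalent of adding directory storage in the web admin
        pvesm add dir local-dir --path /var/lib/vz --content images,rootdir,backup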
  16. apcupsd on pve 4.4 not working properly

    Quick update. It doesn't work on the upstream Ubuntu kernel either. Used kernel version:

        Linux upstest1 4.4.0-77-generic #98-Ubuntu SMP Wed Apr 26 08:34:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

    It is most probably not related to PVE, only the kernel it uses. Will try to submit a bug report...
  17. apcupsd on pve 4.4 not working properly

    Unfortunately it only proves it used to work...
  18. apcupsd on pve 4.4 not working properly

    I have actually just finished such a test. I've set up a PVE 3.4 instance in a KVM VM, passed the appropriate USB device to it and - it works.

        root@pve3test:~# apcaccess
        APC      : 001,028,0707
        DATE     : 2017-05-04 14:33:41 +0200
        HOSTNAME : pve3test
        VERSION  : 3.14.10 (13 September 2011) debian...
  19. apcupsd on pve 4.4 not working properly

    Thanks for answering, I see you have an APC model. I have MGE/Eaton Ellipse models that have been working perfectly fine on PVE 3.4. The problem might only come up when MGE (or specific models) and PVE 4.4 are used together.
  20. apcupsd on pve 4.4 not working properly

    I know it's not exactly an apcupsd UPS support site, but I hope that someone else might have seen the same issues as I have, as they possibly relate to the PVE kernel. I've upgraded 2 systems where we previously ran the latest PVE 3.4. These systems used apcupsd for smaller MGE/Eaton Ellipse UPS boxes...
