Proxmox VE to RAID1 with a newly installed disk

alexpebody

New Member
Mar 17, 2021
Hi there guys! :cool:

Please, I need your advice... I have a little old Proxmox 5.4-13 system on a single drive in my server, and I need to move this Proxmox disk into a RAID1 array. How can I do this?

I found this guide: https://help.ubuntu.ru/wiki/migrate-to-raid but I don't think it is a good idea... 8-( Can anybody give me good step-by-step advice on how to add a new disk (the current Proxmox disk is 300GB, the new one will be a different size) and create a RAID1 that easily clones my Proxmox system?

p.s. There is no easy way for me to create a RAID1 cleanly on my server board and then install Proxmox again... 8-(

p.p.s. After creating md0 I tried cloning with dd, like: dd if=/dev/sda of=/dev/md0, but after that I got a size mismatch error and my array was broken...
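For reference, a minimal way to see that size mismatch (purely an illustrative check, using the same device names as in the dd command above): the mdadm v1.2 superblock reserves space on each member device, so the array ends up slightly smaller than the raw disk.

Code:
# Compare usable sizes in bytes; /dev/md0 reports fewer bytes than
# /dev/sda because the mdadm superblock reserves space on the member.
blockdev --getsize64 /dev/sda
blockdev --getsize64 /dev/md0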

Thanks for all!
 
Honestly, reinstalling and choosing ZFS RAID1 during the installation is the best way. Once installed, reconfigure the network and restore backups of your guests.

MD RAID is not officially supported, and creating it underneath a running system tends to be troublesome.
 
Thanks for the reply, but a reinstall is not so good for us 8-( Maybe we will reinstall later, since we need to update anyway...

What is needed for a ZFS mirror (RAID1)? We have the ISO we installed Proxmox from, but it doesn't seem to offer a ZFS mirror...

Could you tell me the steps, please? I need both ways: without a reinstall, and with a fresh install.

Thank you.
 
How is the node installed? Can you post the output of lsblk and findmnt --ascii in [code][/code] tags?
 
OK, here it is... I need to know: is there an easy way to migrate to ZFS RAID1, or do I need to reinstall and choose ZFS RAID1? If you can, please give a short step-by-step. This is Proxmox 5.4-13 (I can't find that ISO for download, only 5.4-1... will I have some trouble if I reinstall with 5.4-1?). Thanks.

Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   1.8T  0 disk
└─sda1                         8:1    0   1.8T  0 part /stor5
sdb                            8:16   0 372.6G  0 disk
├─sdb1                         8:17   0     1M  0 part
├─sdb2                         8:18   0   256M  0 part
└─sdb3                         8:19   0 372.4G  0 part
  ├─pve-root                 253:0    0    93G  0 lvm  /
  ├─pve-swap                 253:4    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta           253:5    0   128M  0 lvm
  │ └─pve-data-tpool         253:7    0 255.4G  0 lvm
  │   ├─pve-data             253:8    0 255.4G  0 lvm
  │   └─pve-vm--152--disk--0 253:9    0    80G  0 lvm
  └─pve-data_tdata           253:6    0 255.4G  0 lvm
    └─pve-data-tpool         253:7    0 255.4G  0 lvm
      ├─pve-data             253:8    0 255.4G  0 lvm
      └─pve-vm--152--disk--0 253:9    0    80G  0 lvm
sdc                            8:32   0 931.5G  0 disk
└─sdc1                         8:33   0 931.5G  0 part /stor1
sdd                            8:48   0 931.5G  0 disk
└─sdd1                         8:49   0 931.5G  0 part /stor0
sde                            8:64   0 232.9G  0 disk
├─ssd-vm--123--disk--0       253:1    0    90G  0 lvm
├─ssd-vm--106--disk--0       253:2    0    50G  0 lvm
└─ssd-vm--105--disk--0       253:3    0    80G  0 lvm
sdf                            8:80   0 931.5G  0 disk
└─sdf1                         8:81   0 931.5G  0 part /stor4
sdg                            8:96   0 931.5G  0 disk
└─sdg1                         8:97   0 931.5G  0 part /stor7
sdh                            8:112  0 931.5G  0 disk
└─sdh1                         8:113  0 931.5G  0 part /stor3

Code:
root@proxmox:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sdb3  pve lvm2 a--  372.36g 15.74g
  /dev/sde   ssd lvm2 a--  232.88g 12.88g

root@proxmox:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   4   0 wz--n- 372.36g 15.74g
  ssd   1   3   0 wz--n- 232.88g 12.88g

root@proxmox:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 255.36g             9.85   15.45
  root          pve -wi-ao----  93.00g
  swap          pve -wi-ao----   8.00g
  vm-152-disk-0 pve Vwi-a-tz--  80.00g data        31.46
  vm-105-disk-0 ssd -wi-ao----  80.00g
  vm-106-disk-0 ssd -wi-ao----  50.00g
  vm-123-disk-0 ssd -wi-ao----  90.00g

Code:
root@proxmox:~# zpool status
no pools available

Code:
root@proxmox:~# findmnt --ascii
TARGET                                SOURCE               FSTYPE     OPTIONS
/                                     /dev/mapper/pve-root ext4       rw,relatime,errors=remount-ro,data=ordered
|-/sys                                sysfs                sysfs      rw,nosuid,nodev,noexec,relatime
| |-/sys/kernel/security              securityfs           securityfs rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/cgroup                    tmpfs                tmpfs      ro,nosuid,nodev,noexec,mode=755
| | |-/sys/fs/cgroup/unified          cgroup2              cgroup2    rw,nosuid,nodev,noexec,relatime
| | |-/sys/fs/cgroup/systemd          cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
| | |-/sys/fs/cgroup/memory           cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,memory
| | |-/sys/fs/cgroup/pids             cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,pids
| | |-/sys/fs/cgroup/perf_event       cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
| | |-/sys/fs/cgroup/net_cls,net_prio cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
| | |-/sys/fs/cgroup/cpu,cpuacct      cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
| | |-/sys/fs/cgroup/devices          cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,devices
| | |-/sys/fs/cgroup/hugetlb          cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
| | |-/sys/fs/cgroup/freezer          cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,freezer
| | |-/sys/fs/cgroup/blkio            cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,blkio
| | |-/sys/fs/cgroup/rdma             cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,rdma
| | `-/sys/fs/cgroup/cpuset           cgroup               cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
| |-/sys/fs/pstore                    pstore               pstore     rw,nosuid,nodev,noexec,relatime
| |-/sys/fs/bpf                       bpf                  bpf        rw,nosuid,nodev,noexec,relatime,mode=700
| |-/sys/kernel/debug                 debugfs              debugfs    rw,relatime
| |-/sys/fs/fuse/connections          fusectl              fusectl    rw,relatime
| `-/sys/kernel/config                configfs             configfs   rw,relatime
|-/proc                               proc                 proc       rw,relatime
| `-/proc/sys/fs/binfmt_misc          systemd-1            autofs     rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=20591
|-/dev                                udev                 devtmpfs   rw,nosuid,relatime,size=82447400k,nr_inodes=20611850,mode=755
| |-/dev/pts                          devpts               devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
| |-/dev/shm                          tmpfs                tmpfs      rw,nosuid,nodev
| |-/dev/hugepages                    hugetlbfs            hugetlbfs  rw,relatime,pagesize=2M
| `-/dev/mqueue                       mqueue               mqueue     rw,relatime
|-/run                                tmpfs                tmpfs      rw,nosuid,noexec,relatime,size=16494016k,mode=755
| |-/run/lock                         tmpfs                tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k
| |-/run/rpc_pipefs                   sunrpc               rpc_pipefs rw,relatime
| `-/run/user/0                       tmpfs                tmpfs      rw,nosuid,nodev,relatime,size=16494012k,mode=700
|-/stor1                              /dev/sdc1            ext4       rw,relatime,data=ordered
|-/stor0                              /dev/sdd1            ext4       rw,relatime,data=ordered
|-/stor3                              /dev/sdh1            ext4       rw,relatime,data=ordered
|-/stor4                              /dev/sdf1            ext4       rw,relatime,data=ordered
|-/stor7                              /dev/sdg1            ext4       rw,relatime,data=ordered
|-/etc/pve                            /dev/fuse            fuse       rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other
|-/stor5                              /dev/sda1            ext4       rw,relatime,data=ordered
`-/var/lib/lxcfs                      lxcfs                fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
 
Thanks for the info.
The OS is not installed on ZFS but on LVM. This means you will have to reinstall the node and choose ZFS during the installation. If it had already been installed with ZFS on a single disk, there would have been a chance to add another disk to turn it into a mirror.
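For completeness, that single-disk-ZFS-to-mirror case would look roughly like this (a sketch only, assuming the installer's default pool name rpool; sdX2/sdY2 are placeholder partitions, and partitioning plus bootloader setup on the new disk are not shown):

Code:
# Sketch: extend a single-disk ZFS root pool into a mirror by
# attaching a second device to the existing vdev.
zpool status rpool                      # identify the current device
zpool attach rpool /dev/sdX2 /dev/sdY2  # attach the new partition as a mirror
zpool status rpool                      # watch the resilver progress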

All the other disks mounted at /stor1 and so on, are they single disks or a hardware RAID?

Also be aware that PVE 4 has been end of life for a long time, and even PVE 5 has been EOL since last summer. The latest version is PVE 6.

To reinstall the server: make notes of the network setup, take backups of the VMs, and do a restore test first to make sure the backups work. Then reinstall the node, set up the network and other customizations (storages, for example), and restore the backups of the guests.
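As a rough illustration of the backup and restore part (the VMID, storage names and archive path below are only examples, adjust them to your setup):

Code:
# Back up a VM to a configured directory storage before the reinstall
vzdump 152 --mode stop --compress lzo --storage stor5

# ... reinstall the node, reconfigure network and storages ...

# Restore the VM from the dump archive onto the new storage
qmrestore /path/to/vzdump-qemu-152-....vma.lzo 152 --storage local-lvm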

If you have VMs that should have little downtime, consider setting up an intermediate PVE server and restore the VMs there until you can move them to the reinstalled server.
 
