How to add redundancy to the system after installation?

Jun 30, 2020
Hi all,

I've installed PVE on a system that had only one SSD at the time.
Without reinstalling, how can I add a second SSD so that the system itself has redundancy, not only the datastore?
Normally on Debian I do this by creating a RAID1 (md) in degraded mode, copying the system from the single disk onto the degraded RAID1, changing the boot configuration so the system starts from and mounts the new array, rebooting, and finally adding the old first disk to the RAID1.
Is there a similar procedure for PVE?
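For reference, the procedure I mean on Debian looks roughly like this (only a sketch; /dev/sda for the existing disk and /dev/sdb for the new one are example names):

Code:
# create a degraded RAID1 containing only the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# ...copy the system over, fix fstab and the bootloader, reboot from md0...
# then attach the old disk's partition as the second member and let it resync
mdadm --add /dev/md0 /dev/sda1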

My situation is this:
NB: storage1 is already a ZFS RAID1 (mirror), but it is the VM/container datastore.

Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  5.9G     0  5.9G   0% /dev
tmpfs                 1.2G  9.1M  1.2G   1% /run
/dev/mapper/pve-root   28G  5.9G   20G  23% /
tmpfs                 5.9G   63M  5.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/sdc2             511M  312K  511M   1% /boot/efi
storage1              761G  560G  201G  74% /storage1
/dev/fuse              30M   28K   30M   1% /etc/pve
tmpfs                 1.2G     0  1.2G   0% /run/user/0

Code:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sdc3  pve lvm2 a--  <111.29g 13.87g

Code:
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  31
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <111.29 GiB
  PE Size               4.00 MiB
  Total PE              28489
  Alloc PE / Size       24938 / 97.41 GiB
  Free  PE / Size       3551 / 13.87 GiB
  VG UUID               QgJWVZ-02kU-0p8M-PQbj-J4GU-PBz5-8erBxf

Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                VTd59R-i0Dp-3RGC-QEag-0jPR-KsAq-SA1M7g
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:48 +0100
  LV Status              available
  # open                 2
  LV Size                7.00 GiB
  Current LE             1792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                7rE3B2-Sl3J-SnbX-IvY8-I0nH-KP2m-ndpTu9
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:48 +0100
  LV Status              available
  # open                 1
  LV Size                27.75 GiB
  Current LE             7104
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                uKXCUb-ZrAs-SgCg-uSqo-VRTx-aMpf-ROj2Ot
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:58 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 1
  LV Size                60.66 GiB
  Allocated pool data    0.00%
  Allocated metadata     1.59%
  Current LE             15530
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

Any idea how to accomplish this task?

Thank you
 
Because you have some spare space on VG pve...

1] prepare ssd2 as a RAID1 disk (HW RAID, mdraid, ZFS, etc.)
2] create a VG with a name other than "pve" on that RAID1 (assuming you want to keep pve as the final VG name)
3] copy the logical volumes from ssd1 to ssd2 - for example, create a new LV on the ssd2 VG and use dd to copy source_vg/source_lv to target_vg/target_lv (rough commands sketched below)
4] boot the machine into repair mode (from install media, e.g. Debian's; don't mount the SSDs' filesystems)
5] rename the VG on ssd2 to pve
6] regenerate initramfs/grub if needed
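Rough command sketch (not tested; /dev/sde3 and the LV size are assumptions taken from your outputs):

Code:
# 1] degraded raid1 on the new ssd
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde3 missing
# 2] pv + vg under a temporary name
pvcreate /dev/md0
vgcreate pve_new /dev/md0
# 3] recreate each lv and copy it block for block, e.g. root
lvcreate -L 27.75G -n root pve_new
dd if=/dev/pve/root of=/dev/pve_new/root bs=4M status=progress
# 5] from repair mode, swap the names (rename the old vg too,
#    so two vgs called "pve" never coexist)
vgrename pve pve_old
vgrename pve_new pve
# 6] chrooted into the new root, refresh the boot files
update-initramfs -u -k all
update-grub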
 
Hi czechsys,

Thank you for your answer.

I've already tried something similar, but it didn't work...
I created a degraded md0 on the second SSD, and after that I recreated the first SSD's partition layout on the RAID1.
With dd I copied all the data from the first SSD's partitions onto the RAID1 partitions, but there are some missing steps that I don't know how to perform on a Proxmox system...
Normally after these steps I update fstab with the new devices or UUIDs, and I change the GRUB config to add the RAID1 modules and point it at the new boot device, but on Proxmox I don't see how to do that.
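On plain Debian the missing steps would be roughly these (a sketch; /dev/sde and md0p2 are my names for the new disk and its EFI partition):

Code:
# record the array so the initramfs can assemble it at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u -k all
# find the new ESP's UUID for the /boot/efi line in fstab
blkid /dev/md0p2
# reinstall GRUB on the new disk and regenerate its config
# (BIOS-style; an EFI system needs grub-install --target=x86_64-efi instead)
grub-install /dev/sde
update-grub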

The devices appear to be created dynamically, like with udev... This is the original system as installed from the Proxmox ISO on the single disk:
Code:
root@pve:~# more /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=0449-6063 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
root@pve:~# l /dev/pve
total 0
drwxr-xr-x  2 root root   80 Jul 31 18:39 .
drwxr-xr-x 22 root root 5260 Jul 31 18:39 ..
lrwxrwxrwx  1 root root    7 Jul 31 18:39 root -> ../dm-1
lrwxrwxrwx  1 root root    7 Jul 31 18:39 swap -> ../dm-0


So on the new system I have a dm inside an md... I'm a bit confused, and in fact GRUB doesn't start: I get a blinking cursor instead of the menu.

The sdc disk is structured like this (genuinely captured from lsblk):
Code:
sdc                    8:32   0 111.8G  0 disk
├─sdc1                 8:33   0  1007K  0 part
├─sdc2                 8:34   0   512M  0 part /boot/efi
└─sdc3                 8:35   0 111.3G  0 part
  ├─pve-swap         253:0    0     7G  0 lvm  [SWAP]
  ├─pve-root         253:1    0  27.8G  0 lvm  /
  ├─pve-data_tmeta   253:2    0     1G  0 lvm
  │ └─pve-data-tpool 253:4    0  60.7G  0 lvm
  │   └─pve-data     253:5    0  60.7G  0 lvm
  └─pve-data_tdata   253:3    0  60.7G  0 lvm
    └─pve-data-tpool 253:4    0  60.7G  0 lvm
      └─pve-data     253:5    0  60.7G  0 lvm

The sde disk (the new SSD with the RAID1) is structured like this (please ignore the major and minor block numbers; this is a copy of the previous output with the md0 layer added, just to give you an idea of what I did). sdc1 was copied onto md0p1, and likewise sdc2 and sdc3 onto md0p2 and md0p3.


Code:
sde                    8:32   0 111.8G  0 disk
└─md0                 8:35   0 111.3G  0 raid disk
  ├─md0p1                 8:33   0  1007K  0 part
  ├─md0p2                 8:34   0   512M  0 part /boot/efi
  └─md0p3                 8:35   0 111.3G  0 part
    ├─pve-swap         253:0    0     7G  0 lvm  [SWAP]
    ├─pve-root         253:1    0  27.8G  0 lvm  /
    ├─pve-data_tmeta   253:2    0     1G  0 lvm
    │ └─pve-data-tpool 253:4    0  60.7G  0 lvm
    │   └─pve-data     253:5    0  60.7G  0 lvm
    └─pve-data_tdata   253:3    0  60.7G  0 lvm
      └─pve-data-tpool 253:4    0  60.7G  0 lvm
        └─pve-data     253:5    0  60.7G  0 lvm



NB: with the RAID on the new SSD prepared as described above, if I enter the BIOS to select the boot device I can see Proxmox in the EFI menu, but GRUB doesn't start. I think I must adjust something on this new disk, like a pointer to the root FS or some GRUB configuration.
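One thing I suspect (an assumption on my part, not verified): the UEFI firmware itself has to read the ESP, and it cannot look inside an MD array unless the metadata sits at the end of the device (metadata format 1.0), so that the partition still looks like plain FAT. So maybe the ESP array has to be created like this:

Code:
# ESP on RAID1 with metadata 1.0 at the end of the device, so the
# firmware still sees an ordinary FAT filesystem (untested assumption)
mdadm --create /dev/md1 --metadata=1.0 --level=1 --raid-devices=2 /dev/sde2 missing
mkfs.vfat /dev/md1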
 
I don't use Proxmox with EFI, let alone mixed with mdraid. Try another way: boot from a repair CD, mount your "new raided system", and regenerate GRUB... Can't help more.
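Generic repair-CD sequence, something like (untested sketch, adjust device names):

Code:
# from the live/repair environment
mdadm --assemble --scan
vgchange -ay
mount /dev/pve/root /mnt
mount /dev/md0p2 /mnt/boot/efi
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
# now inside the chroot:
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub
update-initramfs -u -k all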
 
