Hi all,
I've installed PVE on a system that, at the time, only had one SSD.
Without reinstalling, how can I add a second SSD so that the system itself is redundant, and not only the datastore?
Normally on Debian I do this by creating a RAID1 (md) in degraded mode on the new disk, copying the system from the single disk to the degraded RAID1, changing the boot configuration so the system boots from and mounts the new array, rebooting, and finally adding the old disk to the RAID1.
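For reference, the Debian sequence I mean is roughly the following (just a sketch of the generic approach, not a recipe tested on this box; /dev/sdc is my current system disk as shown in the output below, /dev/sdX stands for the new SSD, and the partition numbers are only examples):

# copy the partition layout from the existing disk to the new one
sfdisk -d /dev/sdc | sfdisk /dev/sdX
# create a degraded RAID1 containing only the new disk's partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdX3
# put a filesystem on the array and copy the running system over
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -axAXH / /mnt/    # -x stays on the root filesystem; /boot/efi is handled separately
# point /mnt/etc/fstab and the bootloader at the new array
# (update-grub + grub-install on both disks), then reboot from it
# once running from the degraded array, add the old disk's partition
mdadm --add /dev/md0 /dev/sdc3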
Is there a similar procedure in PVE?
My situation is the following.
NB: storage1 is already a ZFS RAID1, but it only holds the VM/container storage.
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  5.9G     0  5.9G   0% /dev
tmpfs                 1.2G  9.1M  1.2G   1% /run
/dev/mapper/pve-root   28G  5.9G   20G  23% /
tmpfs                 5.9G   63M  5.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/sdc2             511M  312K  511M   1% /boot/efi
storage1              761G  560G  201G  74% /storage1
/dev/fuse              30M   28K   30M   1% /etc/pve
tmpfs                 1.2G     0  1.2G   0% /run/user/0
root@pve:~# pvs
PV         VG  Fmt  Attr PSize    PFree
/dev/sdc3  pve lvm2 a--  <111.29g 13.87g
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  31
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <111.29 GiB
  PE Size               4.00 MiB
  Total PE              28489
  Alloc PE / Size       24938 / 97.41 GiB
  Free  PE / Size       3551 / 13.87 GiB
  VG UUID               QgJWVZ-02kU-0p8M-PQbj-J4GU-PBz5-8erBxf
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                VTd59R-i0Dp-3RGC-QEag-0jPR-KsAq-SA1M7g
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:48 +0100
  LV Status              available
  # open                 2
  LV Size                7.00 GiB
  Current LE             1792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                7rE3B2-Sl3J-SnbX-IvY8-I0nH-KP2m-ndpTu9
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:48 +0100
  LV Status              available
  # open                 1
  LV Size                27.75 GiB
  Current LE             7104
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                uKXCUb-ZrAs-SgCg-uSqo-VRTx-aMpf-ROj2Ot
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-01-20 19:23:58 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 1
  LV Size                60.66 GiB
  Allocated pool data    0.00%
  Allocated metadata     1.59%
  Current LE             15530
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
Any ideas on how to accomplish this?
Thank you.