I have a server with 8x SAS disks; it is part of a 5-node PVE cluster, and the default storage lives on the SAS disks. Now I have added 8x SSD and created another disk in PVE.
root@pve4:/etc/pve# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 820.2G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 256M 0 part /boot/efi
└─sda3 8:3 0 820G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 7G 0 lvm
│ └─pve-data-tpool 253:4 0 686G 0 lvm
│ ├─pve-data 253:5 0 686G 0 lvm
│ └─pve-vz 253:6 0 686G 0 lvm /var/lib/vz
└─pve-data_tdata 253:3 0 686G 0 lvm
└─pve-data-tpool 253:4 0 686G 0 lvm
├─pve-data 253:5 0 686G 0 lvm
└─pve-vz 253:6 0 686G 0 lvm /var/lib/vz
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part /ssd
sdc 8:32 0 7.4G 0 disk
├─sdc1 8:33 0 4M 0 part
├─sdc5 8:37 0 250M 0 part
├─sdc6 8:38 0 250M 0 part
├─sdc7 8:39 0 110M 0 part
├─sdc8 8:40 0 286M 0 part
└─sdc9 8:41 0 2.5G 0 part
I created a classic partition via cfdisk, formatted it with mkfs, and mounted it (sdb1). Then I added these lines to /etc/pve/storage.cfg:
dir: SSD
        path /ssd
        content rootdir,vztmpl,backup,iso,images
        maxfiles 1
        shared 0
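For reference, the preparation steps described above look roughly like this (a sketch of my setup; /dev/sdb, ext4, and the /ssd mount point are specific to my host — adjust as needed, and note that an fstab entry is required for the mount to survive a reboot):

```shell
# Partition, format, and mount the new SSD array
# (destructive! double-check the device name first with lsblk)
cfdisk /dev/sdb                     # create one partition -> /dev/sdb1
mkfs.ext4 /dev/sdb1                 # ext4 here; any filesystem PVE can read works
mkdir -p /ssd
mount /dev/sdb1 /ssd

# Make the mount persistent across reboots via /etc/fstab
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /ssd ext4 defaults 0 2" >> /etc/fstab
```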
I can create a new machine on the SSD disk, but I can't migrate any machine to /ssd via the web UI — only to the default PVE data storage, /var/lib/vz.
Edit: I was able to migrate a VM to the other storage from the shell: qm migrate 114 pve4 --targetstorage SSD --online --with-local-disks
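If the goal is only to move a VM's disk between storages on the same node (rather than migrating between nodes), qm also has a move_disk subcommand; a sketch, assuming a VM with ID 114 whose disk sits in slot scsi0:

```shell
# Move the VM's disk to the SSD storage; the old image is kept as an
# "unused" disk on the source storage by default
qm move_disk 114 scsi0 SSD

# Or delete the old disk image after a successful move
qm move_disk 114 scsi0 SSD --delete
```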
I'd like to follow best practice: I want to keep the two arrays and choose every time where to install, migrate, or run any VM (SSD or /var/lib/vz on the SAS disks).
What's the "right" way?
Btw, the next step will be merging the SSD arrays from two servers via DRBD, and maybe with Linstor on top.