[SOLVED] moving local storage to a new RAID

El Muchacho

Well-Known Member
I've installed Proxmox 3.x on a 250 GB SSD.
I want the storage on a software RAID 10 of four SATA3 drives, 1 TB each.

So I thought the easiest way would be to move the mount point to the new LV:

old : /dev/mapper/pve-data --> /var/lib/vz
new : /dev/mapper/raid-vms --> /var/lib/vz

my steps (the full commands are sketched below):
- created the SW RAID as /dev/md0
- created the PV -> pvcreate /dev/md0
- created the VG -> vgcreate raid /dev/md0
- created an LV -> lvcreate --name vms --size 1500G raid
- added an fs to the LV -> mkfs.ext3 /dev/mapper/raid-vms
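
For reference, the whole sequence could look roughly like this - the member disks /dev/sdb ... /dev/sde are only an assumption, adjust them to your actual drives:
Code:
# assumption: the four 1 TB data disks are /dev/sdb ... /dev/sde (the SSD being /dev/sda)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate raid /dev/md0
lvcreate --name vms --size 1500G raid
mkfs.ext3 /dev/mapper/raid-vms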

now switching mount points ...
- created a "dummy" mount point for pve-data -> /dummy
- umount /var/lib/vz
- mount -t ext3 /dev/mapper/pve-data /dummy
- mount -t ext3 /dev/mapper/raid-vms /var/lib/vz

modified /etc/fstab ...
Code:
/dev/pve/data /isos ext3 defaults 0 1
/dev/raid/vms /var/lib/vz ext3 defaults 0 1
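
For reference, the new entry for the RAID LV can be tested without a reboot - a quick sanity check, assuming nothing is accessing /var/lib/vz at that moment:
Code:
# re-mount /var/lib/vz from fstab to make sure the entry is correct
umount /var/lib/vz
mount -a
df -h /var/lib/vz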

df -h shows:
Code:
/dev/mapper/pve-data  152G  188M  152G   1% /dummy
/dev/mapper/raid-vms  1.5T  198M  1.4T   1% /var/lib/vz

looks correct ...

but the Proxmox webgui shows:
Code:
/dummy
size : 151.96 GB
used : 188 MB
avail : 151.78 GB

/var/lib/vz
size : 1.44 TB
used : 75.19 GB
avail : 1.37 TB

75.19 GB used !?

restarting pvedaemon or a reboot made no change ...
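
For reference, on PVE 3.x the services behind the GUI status can be restarted like this; including pvestatd, which collects the storage usage shown in the webgui, is my assumption, not something confirmed here:
Code:
service pvedaemon restart
service pvestatd restart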

storage.cfg shows:
Code:
dir: dummy
        path /dummy
        content iso
        maxfiles 1

dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 1

anything missing in my "masterplan" ;) ?

looking forward to your ideas ;)

Rico

!!! SOLVED !!!

The 5% disk usage comes from the blocks that mkfs.ext3 reserves for the super-user by default - 5% of the 1.5 TB LV is roughly 75 GB, which matches what the webgui counts as used.
That reservation can be reduced, e.g. with tune2fs -m 3 /dev/raid/vms ...
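
A minimal sketch for checking and lowering the reservation (the 3% is just an example value; tune2fs applies the change online):
Code:
# show how many blocks are currently reserved for the super-user
tune2fs -l /dev/raid/vms | grep -i 'reserved block count'

# lower the reservation from the 5% default to 3%
tune2fs -m 3 /dev/raid/vms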
 
I've installed Proxmox 3.x on a 250 GB SSD.
I want the storage on a software RAID 10 of four SATA3 drives, 1 TB each.
Hi,
you know that sw-raid isn't supported?!
So I thought the easiest way would be to move the mount point to the new LV
I would create a new storage instead - that way you can still use the SSD storage as well, e.g. to track down IO bottlenecks...

like "/mnt/sw_raid"
Edit - I didn't read carefully.

What does "du -ks /var/lib/vz/*" show?

Udo
 
Code:
root@proxmox2:~# df -h /var/lib/vz
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/raid-vms  1.5T  198M  1.4T   1% /var/lib/vz

as described in the opening post ...

@udo ...
I think the two of us could also communicate in German ;)
 
