I've installed Proxmox 3.x on a 250GB SSD.
I want the storage on a software RAID 10 of four SATA3 drives, 1TB each.
So I thought the easiest way would be to move the mount point to the new LV:
old : /dev/mapper/pve-data --> /var/lib/vz
new : /dev/mapper/raid-vms --> /var/lib/vz
My steps (the full command sequence is sketched below the list):
- created the sw-raid as /dev/md0
- created the PV -> pvcreate /dev/md0
- created the VG -> vgcreate raid /dev/md0
- created an LV -> lvcreate --name vms --size 1500G raid
- added fs to the LV -> mkfs.ext3 /dev/mapper/raid-vms
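For reference, the whole sequence was roughly this (the drive names here are just examples and the exact mdadm call may differ on your system; only the LVM/mkfs commands are the ones listed above):

Code:
# create the RAID 10 array from the four data drives (device names assumed)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# put LVM on top of the array
pvcreate /dev/md0
vgcreate raid /dev/md0
lvcreate --name vms --size 1500G raid
# filesystem on the new LV
mkfs.ext3 /dev/mapper/raid-vms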
Now switching mount points ...
- created "dummy-mountpoint" for pve-data -> /dummy
- umount /var/lib/vz
- mount -t ext3 /dev/mapper/pve-data /dummy
- mount -t ext3 /dev/mapper/raid-vms /var/lib/vz
modified /etc/fstab ...

Code:
/dev/pve/data /isos ext3 defaults 0 1
/dev/raid/vms /var/lib/vz ext3 defaults 0 1
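A quick way to sanity-check the new fstab entries before the next reboot (filesystems that are already mounted are simply skipped):

Code:
# mount everything from fstab that is not mounted yet
mount -a
# confirm what ended up where
df -h /var/lib/vz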
df -h shows:

Code:
/dev/mapper/pve-data  152G  188M  152G   1% /dummy
/dev/mapper/raid-vms  1.5T  198M  1.4T   1% /var/lib/vz
Looks correct ...
but the Proxmox web GUI shows:

Code:
/dummy
size : 151.96 GB
used : 188 MB
avail : 151.78 GB

/var/lib/vz
size : 1.44 TB
used : 75.19 GB
avail : 1.37 TB
75.19 GB used !?
Restarting pvedaemon or rebooting made no change ...
storage.cfg shows:

Code:
dir: dummy
        path /dummy
        content iso
        maxfiles 1

dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 1
Anything missing in my "masterplan"?
Hopefully awaiting ideas,
Rico
!!! SOLVED !!!
The 5% disk usage comes from the blocks reserved for the superuser, which mkfs.ext3 sets aside by default (5% of the 1.5 TB LV is roughly the 75 GB shown as used).
That reservation can be reduced, e.g. with tune2fs -m 3 /dev/raid/vms ...
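To see and adjust the reservation on the LV (for a pure data volume it can even be dropped entirely):

Code:
# show the current reserved block count (5% of all blocks by default)
tune2fs -l /dev/raid/vms | grep -i 'reserved block count'
# lower the reservation to 3% ...
tune2fs -m 3 /dev/raid/vms
# ... or remove it completely for a data-only filesystem
tune2fs -m 0 /dev/raid/vms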
				