Hello,
I have installed Proxmox on a 16 GB DOM
(+ 2 SSDs + 2 SATA HDDs).
Is it correct to increase the size of /dev/pve/root to 8 GB to have more space for logs?
At the moment I see:
vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               14.50 GiB
  PE Size               4.00 MiB
  Total PE              3711
  Alloc PE / Size       1602 / 6.26 GiB
  Free  PE / Size       2109 / 8.24 GiB
  VG UUID               OeXdGs-APvk-bw6y-Uxs1-qPUN-5Rc7-2tF3X1
--------------------------------------------------------------------------------------
lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                L3t9fm-VVUj-aRYF-UbHS-Ojrz-jCUM-QuvWpO
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-03-01 12:49:37 +0100
  LV Status              available
  # open                 2
  LV Size                1.75 GiB
  Current LE             448
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                BL7EWK-cXQO-kRme-vrnK-2gJR-fnES-0wAJ8L
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-03-01 12:49:37 +0100
  LV Status              available
  # open                 1
  LV Size                3.50 GiB
  Current LE             896
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                xZhS8k-Vm0g-I9pV-HonC-5CVH-Hu9F-A5eCHP
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-03-01 12:49:38 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.98%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:4
--------------------------------------------------------------------------------------
free -h
                    total       used       free     shared    buffers     cached
Mem:                  62G       1.3G        61G        48M        16M       133M
-/+ buffers/cache:               1.2G        61G
Swap:                1.7G         0B       1.7G
--------------------------------------------------------------------------------------
zfs list -t all
NAME             USED  AVAIL  REFER  MOUNTPOINT
storageSSD       408K   215G    96K  /storageSSD
storageSSD/LXC    96K   215G    96K  /storageSSD/LXC
storageSata      100M  1.76T   100M  /storageSata
--------------------------------------------------------------------------------------
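Since vgdisplay shows 8.24 GiB of free PE in the pve VG, growing root from 3.50 GiB to 8 GB should fit. If this is the right approach, I was planning to run something like the following (untested sketch, assuming the root filesystem is the default ext4, which can be grown while mounted):

df -h /                        # check current root usage before resizing
lvextend -L 8G /dev/pve/root   # grow the root LV to 8 GiB from the free extents in pve
resize2fs /dev/pve/root        # grow the ext4 filesystem to fill the enlarged LV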
Thanks!