Beginner questions

cartwright

New Member
Sep 5, 2017
Hi,

I'm very new to Proxmox and have a few questions and clarification requests.

I have 6 disks, 5 of them in an LVM volume group. The 6th disk is hosting Proxmox.

What I am trying to do is run a fileserver container on the 6th disk that hosts Proxmox and attach the LVM volume group to it as additional storage, but I am having a hard time figuring that out. When I add the LVM volume group to Proxmox it shows 100% usage, which I don't understand; the disks are brand new. I have read that LVM allocates 100% of the pool even though the data inside is empty. Is that what is going on here?

Past that I have two NICs and would like to team them together. How might I achieve that?
 
Hi,

please send the output of

Code:
lvs

Past that I have two NICs and would like to team them together. How might I achieve that?

In Linux terminology this is called bonding.
Have a look at the network section of the reference documentation.
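
For example, a minimal active-backup bond behind the default bridge could look roughly like this in /etc/network/interfaces. This is only a sketch: the NIC names eno1/eno2, the bridge name vmbr0 and the addresses are assumptions, and modes like 802.3ad (LACP) also need matching switch configuration.

Code:
# /etc/network/interfaces (sketch, adjust names and addresses)
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

A reboot (or a restart of networking) is needed before the change takes effect. The bond can usually also be created in the GUI under the node's System > Network panel, which writes the same file.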
 
here is the output

root@pve:~# lvs
  LV          VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data_volume data_pool -wi-a----- 18.19t
  data        pve       twi-a-tz-- 150.64g             0.00   0.44
  root        pve       -wi-ao---- 58.00g
  swap        pve       -wi-ao---- 8.00g
 
Can you also send '/etc/pve/storage.cfg' and the output of the command 'lsblk'?
 
here they are.

root@pve:~# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   3.7T  0 disk
└─data_pool-data_volume   253:5    0  18.2T  0 lvm
sdb                         8:16   0   3.7T  0 disk
└─data_pool-data_volume   253:5    0  18.2T  0 lvm
sdc                         8:32   0   3.7T  0 disk
└─data_pool-data_volume   253:5    0  18.2T  0 lvm
sdd                         8:48   0   3.7T  0 disk
└─data_pool-data_volume   253:5    0  18.2T  0 lvm
sde                         8:64   0   3.7T  0 disk
└─data_pool-data_volume   253:5    0  18.2T  0 lvm
nvme0n1                   259:0    0 232.9G  0 disk
├─nvme0n1p1               259:1    0     1M  0 part
├─nvme0n1p2               259:2    0   256M  0 part /boot/efi
└─nvme0n1p3               259:3    0 232.6G  0 part
  ├─pve-root              253:0    0    58G  0 lvm  /
  ├─pve-swap              253:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta        253:2    0    76M  0 lvm
  │ └─pve-data            253:4    0 150.7G  0 lvm
  └─pve-data_tdata        253:3    0 150.7G  0 lvm
    └─pve-data            253:4    0 150.7G  0 lvm

root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: sata-storage
        vgname data_pool
        content rootdir,images
        shared 1
 
All looks OK, but why is your sata-storage marked as shared?
This flag is for clusters where the nodes share a storage.
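
If you prefer the CLI over the GUI, something along these lines should clear the flag again (a sketch, assuming the storage ID sata-storage from your storage.cfg):

Code:
pvesm set sata-storage --shared 0

Alternatively, simply remove the "shared 1" line from /etc/pve/storage.cfg.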

please send the output of

Code:
mount
 
I had checked the shared box during setup in the GUI. Honestly, I didn't know any better. I've unchecked it now.

Output from the mount command.

root@pve:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=8126708k,nr_inodes=2031677,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1628608k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14349)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=1628604k,mode=700)
root@pve:~#
 
Just following up here. Due to the 100% usage on my LVM pool I am not able to create containers or VMs. I'm not really sure what to do here. Is there a guide of some sort I can follow? I'd really like to use this product, but I'm starting to worry it is completely over my head, to the point that I'll have to settle for a lesser solution such as FreeNAS, which isn't ideal since ZFS requires so much memory.