Hey all, thank you in advance for any help you can provide.
I have a 3-node cluster and everything seems to be running fine. I have 2 Ceph pools, ceph300 and ceph500; each node has 2 x 300 GB HDDs and 2 x 500 GB HDDs. Both pools are replicated and set to the default 3/2. I also have a Linux Mint 21 server set up with NFS exports: one export is NFS-ISOs and the other is NFS-VMs. I created both in the GUI, nothing unusual about the setup.
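In case it's relevant, this is how I've been checking that both pools really are set to 3/2 and how full they are (just the standard Ceph commands, run on one of the nodes):
Code:
# should come back as size=3 / min_size=2 for both pools
ceph osd pool get ceph300 size
ceph osd pool get ceph300 min_size
ceph osd pool get ceph500 size
ceph osd pool get ceph500 min_size
# per-pool usage
ceph df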
My NFS settings in Mint
Code:
root@parsecnas:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/export/NFS (rw,no_root_squash)
/export (rw,no_root_squash,fsid=0)
/export/ISO2 (no_root_squash,rw)
root@parsecnas:~# showmount -e
Export list for parsecnas:
/export/ISO2 *
/export *
/export/NFS *
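And this is how I've been confirming the exports are visible and mounted from the Proxmox side (server IP left out here the same way it is everywhere else in this post):
Code:
# what the Mint box is exporting, as seen from a Proxmox node
showmount -e ip
# the NFS mounts the node actually has, plus the mount options it used
findmnt -t nfs,nfs4
nfsstat -m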
My storage settings in Proxmox
Code:
root@proxhost02:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: ceph300
        content rootdir,images
        krbd 0
        pool ceph300

rbd: ceph500
        content rootdir,images
        krbd 0
        pool ceph500

nfs: NFS-VMs
        export /export/NFS
        path /mnt/pve/NFS-VMs
        server ip
        content images,rootdir
        options vers=3
        prune-backups keep-all=1

nfs: NFS-ISOs
        export /export/ISO2
        path /mnt/pve/NFS-ISOs
        server ip
        content iso
        options vers=3
        prune-backups keep-all=1
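For completeness, this is what I run to see how full each of those storages is from the Proxmox side:
Code:
# usage/availability of every storage defined in storage.cfg
pvesm status
# what is currently sitting on the pool and on the NFS share
pvesm list ceph500
pvesm list NFS-VMs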
NCDU for the host being primarily used
Code:
--- / ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
26.2 GiB [############################] /mnt
18.0 GiB [################### ] /root
2.7 GiB [## ] /usr
. 536.0 MiB [ ] /var
93.7 MiB [ ] /boot
66.2 MiB [ ] /dev
5.6 MiB [ ] /etc
1.7 MiB [ ] /run
108.0 KiB [ ] /tmp
e 16.0 KiB [ ] /lost+found
12.0 KiB [ ] /ISO
e 4.0 KiB [ ] /srv
e 4.0 KiB [ ] /opt
e 4.0 KiB [ ] /media
e 4.0 KiB [ ] /home
. 0.0 B [ ] /proc
0.0 B [ ] /sys
@ 0.0 B [ ] libx32
@ 0.0 B [ ] lib64
@ 0.0 B [ ] lib32
@ 0.0 B [ ] sbin
@ 0.0 B [ ] lib
@ 0.0 B [ ] bin
Code:
--- /mnt/pve -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
19.5 GiB [############################] /NFS-VMs
6.7 GiB [######### ] /NFS-ISOs
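Since /root is sitting at 18 GiB, this is how I plan to drill into it and into the local dir storage to see exactly what the transfers are leaving behind:
Code:
# break down what is using the space under /root and /var/lib/vz
du -xh --max-depth=1 /root | sort -h
du -xh --max-depth=1 /var/lib/vz | sort -h
ls -lh /root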
df -h for the mainly used node and a different node
Code:
root@proxhost02:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  126G     0  126G   0% /dev
tmpfs                  26G  1.8M   26G   1% /run
/dev/mapper/pve-root   43G   22G   20G  53% /
tmpfs                 126G   67M  126G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   32K  128M   1% /etc/pve
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-9
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-3
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-2
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-8
ip:/export/ISO2        26T   27G   26T   1% /mnt/pve/NFS-ISOs
ip:/export/NFS         26T   27G   26T   1% /mnt/pve/NFS-VMs
tmpfs                  26G     0   26G   0% /run/user/0

root@proxhost03:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  126G     0  126G   0% /dev
tmpfs                  26G  1.7M   26G   1% /run
/dev/mapper/pve-root   43G  3.0G   38G   8% /
tmpfs                 126G   66M  126G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   32K  128M   1% /etc/pve
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-4
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-10
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-11
tmpfs                 126G   28K  126G   1% /var/lib/ceph/osd/ceph-5
ip:/export/ISO2        26T   27G   26T   1% /mnt/pve/NFS-ISOs
ip:/export/NFS         26T   27G   26T   1% /mnt/pve/NFS-VMs
tmpfs                  26G     0   26G   0% /run/user/0
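To compare the root filesystem on all three nodes at once I just loop over them from one node (proxhost01 is a guess here, I only pasted 02 and 03 above):
Code:
for n in proxhost01 proxhost02 proxhost03; do
    echo "== $n =="
    ssh "$n" df -h /
done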
I am using this guide https://www.youtube.com/watch?v=6jCEe4sfe_g to transfer VMs from our VMware environment to our Proxmox environment. Below are the commands:
Code:
ovftool --parallelThreads=24 vi://root@ip/Home\ Assistant/ ceph500
Opening VI source: vi://root@ip:443/Home%20Assistant/
Opening OVF target: ceph500
Writing OVF package: ceph500/Home Assistant/Home Assistant.ovf
Transfer Completed
qm importovf 100 ceph500/Home\ Assistant/Home\ Assistant.ovf NFS-VMs
transferred 32.0 GiB of 32.0 GiB (100.00%)
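While those two commands run I've been keeping an eye on the local disk like this, which is how I noticed it filling up:
Code:
# watch the root filesystem and /root while the export/import is running
watch -n 10 'df -h /; du -sh /root'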
What I am seeing is that when I transfer a VM over, it fills up the local storage, even though I am transferring it to ceph500 and then moving it to NFS-VMs. With only about 45 GB of local storage in total, it looks like any big transfer of 50 GB or more will fail.
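Part of what I'm wondering is whether I should point ovftool at a staging directory on the NFS mount instead of a local path, so the exported OVF/VMDK never touches the node's 45 GB root disk. Something like the below, where ovf-staging is just a name I made up and I haven't confirmed this is the right approach:
Code:
# stage the OVF package on the NFS share instead of the node's local disk
mkdir -p /mnt/pve/NFS-VMs/ovf-staging
ovftool --parallelThreads=24 vi://root@ip/Home\ Assistant/ /mnt/pve/NFS-VMs/ovf-staging
# then import from the share, same target storage as before
qm importovf 100 "/mnt/pve/NFS-VMs/ovf-staging/Home Assistant/Home Assistant.ovf" NFS-VMs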