Cluster NFS and CEPH keep filling up Local Storage

bughatti

Renowned Member
Oct 11, 2014
Hey all, thank you in advance for any help you can provide.

I have a 3-node cluster, and all seems to be running fine. I have two Ceph pools, ceph300 and ceph500; each node has 2 x 300 GB HDDs and 2 x 500 GB HDDs. Both pools are replicated and set to the default 3/2. I also have a Linux Mint 21 server set up with NFS exports: one export is NFS-ISOs and the other is NFS-VMs. I created both in the GUI; nothing crazy in the setup.

My NFS settings in Mint

Code:
root@parsecnas:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/export/NFS     (rw,no_root_squash)
/export (rw,no_root_squash,fsid=0)
/export/ISO2    (no_root_squash,rw)
root@parsecnas:~# showmount -e
Export list for parsecnas:
/export/ISO2 *
/export      *
/export/NFS  *

My storage settings in Proxmox

Code:
root@proxhost02:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: ceph300
        content rootdir,images
        krbd 0
        pool ceph300

rbd: ceph500
        content rootdir,images
        krbd 0
        pool ceph500

nfs: NFS-VMs
        export /export/NFS
        path /mnt/pve/NFS-VMs
        server ip
        content images,rootdir
        options vers=3
        prune-backups keep-all=1

nfs: NFS-ISOs
        export /export/ISO2
        path /mnt/pve/NFS-ISOs
        server ip
        content iso
        options vers=3
        prune-backups keep-all=1

ncdu output for the host being primarily used

Code:
--- / ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   26.2 GiB [############################] /mnt
   18.0 GiB [###################         ] /root
    2.7 GiB [##                          ] /usr
. 536.0 MiB [                            ] /var
   93.7 MiB [                            ] /boot
   66.2 MiB [                            ] /dev
    5.6 MiB [                            ] /etc
    1.7 MiB [                            ] /run
  108.0 KiB [                            ] /tmp
e  16.0 KiB [                            ] /lost+found
   12.0 KiB [                            ] /ISO
e   4.0 KiB [                            ] /srv
e   4.0 KiB [                            ] /opt
e   4.0 KiB [                            ] /media
e   4.0 KiB [                            ] /home
.   0.0   B [                            ] /proc
    0.0   B [                            ] /sys
@   0.0   B [                            ]  libx32
@   0.0   B [                            ]  lib64
@   0.0   B [                            ]  lib32
@   0.0   B [                            ]  sbin
@   0.0   B [                            ]  lib
@   0.0   B [                            ]  bin

Code:
--- /mnt/pve -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   19.5 GiB [############################] /NFS-VMs
    6.7 GiB [#########                   ] /NFS-ISOs

df -h for the mainly used node and a different node

Code:
root@proxhost02:~# df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       126G     0  126G   0% /dev
tmpfs                       26G  1.8M   26G   1% /run
/dev/mapper/pve-root        43G   22G   20G  53% /
tmpfs                      126G   67M  126G   1% /dev/shm
tmpfs                      5.0M     0  5.0M   0% /run/lock
/dev/fuse                  128M   32K  128M   1% /etc/pve
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-9
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-3
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-2
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-8
ip:/export/ISO2   26T   27G   26T   1% /mnt/pve/NFS-ISOs
ip:/export/NFS    26T   27G   26T   1% /mnt/pve/NFS-VMs
tmpfs                       26G     0   26G   0% /run/user/0



root@proxhost03:~# df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       126G     0  126G   0% /dev
tmpfs                       26G  1.7M   26G   1% /run
/dev/mapper/pve-root        43G  3.0G   38G   8% /
tmpfs                      126G   66M  126G   1% /dev/shm
tmpfs                      5.0M     0  5.0M   0% /run/lock
/dev/fuse                  128M   32K  128M   1% /etc/pve
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-4
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-10
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-11
tmpfs                      126G   28K  126G   1% /var/lib/ceph/osd/ceph-5
ip:/export/ISO2   26T   27G   26T   1% /mnt/pve/NFS-ISOs
ip:/export/NFS    26T   27G   26T   1% /mnt/pve/NFS-VMs
tmpfs                       26G     0   26G   0% /run/user/0

I am using this guide https://www.youtube.com/watch?v=6jCEe4sfe_g to transfer VMs from our VMware environment to our Proxmox environment. Below are the commands:

Code:
ovftool --parallelThreads=24 vi://root@ip/Home\ Assistant/ ceph500

Opening VI source: vi://root@ip:443/Home%20Assistant/
Opening OVF target: ceph500
Writing OVF package: ceph500/Home Assistant/Home Assistant.ovf
Transfer Completed

qm importovf 100 ceph500/Home\ Assistant/Home\ Assistant.ovf NFS-VMs

transferred 32.0 GiB of 32.0 GiB (100.00%)

What I am seeing is that when I transfer a VM over, it seems to fill up the local storage, even though I am transferring to ceph500 and then moving it to NFS-VMs. With a total of 45 GB of local storage, it seems like it will fail on any big transfer of 50 GB or above.
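For reference, here is a rough way to confirm where that space is actually going; the path is only a guess on my part, based on the 18 GiB under /root in the ncdu output above (assuming ovftool was run from root's home directory):

Code:
# Hypothetical check: if ovftool was started from /root, its output directory
# named "ceph500" would sit there, on the local root filesystem.
du -sh /root/* 2>/dev/null | sort -h
du -sh /root/ceph500 2>/dev/null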
 
The commands "ovftool" and "qm importovf" do not operate on the Ceph RBD pool but on a local directory called "ceph500".

Maybe you should create a directory below /mnt/pve/NFS-VMs for the temporary storage of the OVF file.
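For example, something like this (a minimal sketch only; the staging folder name and the VMID are assumptions, and "ip" is the placeholder from the commands above):

Code:
# Stage the OVF export on the NFS mount instead of the local disk,
# then import it into a real Proxmox VE storage.
mkdir -p /mnt/pve/NFS-VMs/ovf-staging
cd /mnt/pve/NFS-VMs/ovf-staging
ovftool --parallelThreads=24 vi://root@ip/Home\ Assistant/ .
qm importovf 100 "./Home Assistant/Home Assistant.ovf" ceph500

That way the multi-gigabyte OVF/VMDK files never land on the 43 GB root filesystem.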
 
Sorry, this seems a bit confusing. On every Linux machine I have set up in 20 years, when I add an NFS share I can write to that NFS share and it never fills up the local mount point, no matter what type of file I use. Plus, none of the documents, videos, and forums I have looked at say anything about having to go back and create a directory storage on top of an NFS share, or on top of the Ceph pool either.
 
To convert, you simply need some kind of locally accessible storage, whether that is the local hard drive or a mounted NFS share. However, you cannot simply pass the tool a Proxmox VE storage ID. For example, you can create a folder directly under /mnt/pve/NFS-VMs and carry out the conversion there.

Many commands interact with Proxmox VE and resolve these paths on their own, but not here. Ultimately, /etc/pve/storage.cfg is just an index file that tells Proxmox VE which storages exist, how and where they are mounted, and what type they are. The actual mounting is usually done with the standard system tools.
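To illustrate (the ISO name below is made up; pvesm is the Proxmox VE storage tool that does this resolution):

Code:
# A PVE-aware tool resolves a storage ID plus volume to a real path via storage.cfg:
pvesm path NFS-ISOs:iso/example.iso
# -> /mnt/pve/NFS-ISOs/template/iso/example.iso
# ovftool knows nothing about this mapping, so a target like "ceph500" is treated
# as a plain relative path in the current working directory.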

As soon as you have converted everything, you can move the virtual disk; the storage ID will then be resolved accordingly and your virtual disk will end up on the Ceph pool, for example.
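A sketch of that last step (the VMID and disk name are assumptions, check them with qm config first; on older Proxmox VE releases the subcommand is qm move_disk instead of qm disk move):

Code:
# After qm importovf has finished, move the imported disk onto the Ceph pool
# and drop the copy left on the source storage.
qm config 100                              # confirm the disk name (scsi0, sata0, unused0, ...)
qm disk move 100 scsi0 ceph500 --delete 1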
 
A Ceph storage pool is not a filesystem that any arbitrary command can write to.
Perhaps another way to illustrate this: if one were to type

Code:
ovftool --parallelThreads=24 vi://root@ip/Home\ Assistant/ AWS_S3

one should not expect the data to be sent to AWS; it will be written to a local directory called AWS_S3.
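Put differently (just an illustration; pvesm status shows the PVE-side view):

Code:
# "ceph500" is a storage ID only inside Proxmox VE (see /etc/pve/storage.cfg) ...
pvesm status
# ... while to ovftool it is just a relative path, so the exported data ends up
# on the local disk under whatever directory the command was run from:
ls -lhd ./ceph500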


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 