Shared Drive from Cluster causing one host to fill local disk space

Apr 2, 2022
Good morning all! I have a Proxmox cluster running with two hosts, with the following versions:
Both running PVE 8.1.3

Host 2 (newest) - Kernel 6.5.11-5-pve (was community edition without a subscription; just added a subscription with the enterprise repo). Last updated last night ~10pm US EST

Host 1 (the 'OG') - Kernel 6.2.16-12-pve (subscription w/Enterprise repo). Last updated last night ~9pm US EST

On Host 1 I have a local USB-mounted 1TB RDX 'tape' drive (basically, the tapes are HDDs in a cartridge format, if you are not familiar with them). I use it for backups. When I added Host 2, I shared out the RDX tape drive from the cluster so that I could backup the VMs on the new host. That all seemingly worked... or so I thought.

Yesterday I logged into the cluster to do monthly maintenance of updates and what have you, and found I couldn't update host 2 due to the local root drive being full. Odd, as I had nothing on it save two ISOs (the Ubuntu live server ISO and the VirtIO drivers ISO). I removed those just the same, ensured log rotation and all was good, searched high and low for what could be eating up the space, and the only thing I could find was that ~65GB was being taken up by /mnt/pve/RDX_Backups, which is the backups drive.
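For anyone hunting down disk usage like this, a sketch of the kind of search involved (paths are the ones from this thread; adjust to taste):

```shell
# Stay on the root filesystem (-x) and list the biggest directories first.
# A directory under /mnt/pve showing up here means it is consuming the
# root disk, i.e. nothing is actually mounted there.
du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20

# Compare against what df reports for the root filesystem.
df -h /
```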

Ok, well, I looked in there and only saw the VM backups for that host. Odd. I cleared them out for good measure (only one is in 'production' in my home lab, as I'm building the other out). I took a new backup and saw local disk space clear up and then start getting chewed up again. Ok, very, very strange. I tried unsharing the RDX drive in the cluster and setting it to only host 1, but that had no effect as far as the mount point going away. Back on host 1, the RDX drive seems right, doesn't count against the local storage, and I see all the other VMs on it. Only other thing I recalled doing was setting in /etc/exports on host 1 a test mount point to export to the other host, but there is nothing in /etc/fstab mounting it, nor can I mount that anywhere else on host 2, so clearly it wasn't working. For whatever reason I never documented that in my home wiki, but anyway...

I am at a loss. I haven't rebooted host 1 since last night's update just yet. I did reboot host 2 a few times, including after unsharing the RDX drive but it still shows in that mount point which I don't understand... it should not be counting against the local root's disk space unless that mount point isn't really the RDX drive... which I'm starting to think may be the case?

Anyone have any ideas what to look at, at this point? I'm just at a loss why this mount point would stay persistently... do I need to remove sharing in the cluster and then reboot host 1 for it to fully clear? Or something else?

edit: updated subscription info on host 2.
---------


Final edit: solved!
Basically did the following to resolve the issue:
- On host1, installed the requisite NFS server services, updated /etc/exports with /mnt/pve/RDX_Backups *(rw,sync,no_root_squash,no_subtree_check), and restarted the necessary service.

- On host2, ran mkdir /media/rdx_backups, then edited /etc/fstab to add the NFS mount against the new directory, followed by mount -a to get it running.

- In the Datacenter GUI, added a new rdxbackupsHost2 Directory entry and specified /media/rdx_backups for host2. Et voilà: added backups accordingly and now they're going to the RDX tape correctly.
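A sketch of those steps as commands, for anyone following along (hostnames and export options are the ones from this thread; the exact fstab options weren't posted, so defaults,_netdev is an assumption):

```shell
# --- host1: export the RDX mount over NFS ---
apt install nfs-kernel-server
echo '/mnt/pve/RDX_Backups *(rw,sync,no_root_squash,no_subtree_check)' >> /etc/exports
exportfs -ra                          # re-read /etc/exports
systemctl restart nfs-kernel-server

# --- host2: mount the export, persistently ---
mkdir -p /media/rdx_backups
echo 'host1:/mnt/pve/RDX_Backups /media/rdx_backups nfs defaults,_netdev 0 0' >> /etc/fstab
mount -a                              # mount everything in fstab now
```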
 
I shared out the RDX tape drive from the cluster so that I could backup the VMs on the new host.
what exactly does this mean?
Only other thing I recalled doing was setting in /etc/exports on host 1 a test mount point to export to the other host, but there is nothing in /etc/fstab mounting it, nor can I mount that anywhere else on host 2 so clearly it wasn't working. For whatever reason I never documented that in my home wiki
It sounds as if, perhaps, it was never mounted properly. Without exact commands, steps, and outputs, you are asking for speculation.
I did reboot host 2 a few times, including after unsharing the RDX drive but it still shows in that mount point which I don't understand
what exactly does "show in that mount point" mean to you?

It's really helpful when descriptions of the problem are combined with solid facts, i.e. command-line outputs (enclosed in code tags).

Anyone have any ideas what to look at, at this point?
I'd recommend starting from scratch, documenting your steps and confirming that the configuration that you put in place is persistent.

good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'll try to answer as best I can at the moment:

Under Datacenter -> Storage -> rdx_backups (which is the RDX tape drive on Host 1 connected to it via USB) has been added.
Directory: /mnt/pve/RDX_Backups
Content: VZDump backup file
Nodes: All
Enable : yes
Shared: yes

On host 1 I had installed the nfs components necessary for sharing NFS mount points:
On host 1 under /etc/exports is:
/mt/pve/RDX_Backups *(rw,sync,no_root_squash,no_subtree_check)


On host 2: last night attempted the following:
mkdir /media/test
mount -t nfs host1:/pve/RDX_Backups /media/test
(output was nothing - hung indefinitely - never mounted)
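When an NFS mount just hangs like that, a few quick checks usually narrow it down (run from host2 unless noted; `host1` as in the thread):

```shell
# Does host1 actually export anything? Queries the server's export list.
showmount -e host1

# Is an NFS server registered with the portmapper on host1 at all?
rpcinfo -p host1 | grep -E 'nfs|mountd'

# On host1 itself: what is currently exported?
exportfs -v
```

A hang with no error message often means the server side isn't answering at all, e.g. nfs-kernel-server not installed or not running, which matches what was found later in this thread.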

in /etc/fstab there is nothing for mounting that drive.

In the GUI for host 2 it shows rdx_backups (host2) on host2 with 89.63GB (which is what the local disk is...)
for Host1 it shows rdx_backups (host1) at the correct 1TB of space (well 983.35GB after formatting and what have you).

After getting host2 on a paid subscription, on the same repo, and fully updated, I rebooted both it and host1 a few minutes ago. Both are showing the same kernel info now (Kernel 6.5.11-5-pve) and are somewhat on an even keel.

I'd recommend starting from scratch, documenting your steps and confirming that the configuration that you put in place is persistent.

What do you mean by this exactly? Remove the RDX_Backups from the cluster storage configuration and blank out /etc/exports ? (tried doing this somewhat last night, though never rebooted host1).

It makes 0 sense to me why rdx_backups on host2 is effectively NOT the actual rdx_backups that should be accessible to it. It's like the cluster said, sure, here's a mount point but for you it's your local drive.
 
Update:

Just did the following to remove things:

Datacenter: removed rdx_backups

Host2: rm -rf /mnt/pve/RDX_Backups (only thing I could see in there was the local backups)

Host1: removed the entry from /etc/exports and restarted the NFS utils and NFS client services (the NFS server was apparently not installed?)

The RDX_Backups is gone now from both, and local space on host2 is now normal.

Update 1:
on Host1, I still had /mnt/pve/RDX_Backups as a local directory added in on the host's directory listing.

Update 2:
In Data Center, added storage -> Directory -> rdxbackups at mount point /mnt/pve/RDX_Backups/ only for host1.
This shows correctly.

So, I think I know what the issue is/was. When 'sharing' that, host 2 was not actually mounting anything across the network; instead that created a plain local directory at the mount point, which was using local disk space instead of the actual RDX tape drive. Fun.
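An easy way to confirm that kind of thing, i.e. whether a path is a real mount or just an ordinary directory eating local disk:

```shell
# findmnt prints the filesystem backing the path, or nothing at all
# if it is just a plain directory on the parent filesystem.
findmnt /mnt/pve/RDX_Backups

# mountpoint gives a yes/no answer with a matching exit code,
# handy in scripts.
mountpoint /mnt/pve/RDX_Backups
```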

So now the question is how to share that out properly. Guess I'll be looking over the NFS bits for Proxmox again to try to get the mount point exported properly for NFS access.
 
1. Marking storage "shared" does not actually share that storage. You may know this already, but it is often news to many users.
2. /mnt/pve is a special location where PVE will attempt to mount external disks.
3. You used "Directory" storage, which tells PVE that it should expect the same content on both hosts. But because it's "Directory", it's your responsibility to make it available.
4. Configuring an NFS server is out of scope of PVE configuration and installation. There are many, many guides available online.
5. Since you are not using the NFS storage type in PVE (i.e. you used Directory), it's your responsibility to mount the NFS export on host2 in an appropriate location and ensure that it's available when needed.

You've made this too complex for a home setup.
a) Ensure that your backup disk is mounted on boot via the generic Linux mechanism. Mount it outside of PVE's special directory (/mnt/pve), e.g. at /mnt/backups.
b) Create a Directory storage in PVE that is NOT shared and is restricted to host1, pointing at /mnt/backups. Mark it "is_mountpoint=yes" so that you don't end up writing to the local disk if your USB drive is not accessible.
c) Export it via NFS.
d) Create an NFS-type storage that is restricted to host2 and points to host1.
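Sketched as commands, assuming pvesm is used for the storage definitions (storage names, the UUID placeholder, and ext4 are illustrative, not from the thread):

```shell
# a) host1: mount the backup disk on boot, outside /mnt/pve
#    (<your-disk-uuid> is a placeholder; find the real one with blkid)
echo 'UUID=<your-disk-uuid> /mnt/backups ext4 defaults,nofail 0 2' >> /etc/fstab
mount -a

# b) host1: unshared Directory storage, guarded against an unmounted disk
pvesm add dir rdx_backups --path /mnt/backups --content backup \
    --nodes host1 --is_mountpoint yes

# c) host1: export /mnt/backups via NFS (/etc/exports + exportfs -ra)

# d) NFS-type storage restricted to host2, pointing at host1's export
pvesm add nfs rdx_backups_nfs --server host1 --export /mnt/backups \
    --content backup --nodes host2
```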

good luck


 
Thank you for the reply again!

It dawned on me too as I was redoing and typing it up (see above). LOL. That's what I get for troubleshooting late at night :)

Yes, that is news to me that "shared" isn't really shared. When I had read into it before, I interpreted it incorrectly, thinking Proxmox was doing its own version of an NFS-type share across the cluster. D'oh!
 
