[SOLVED] CephFS max file size

Hello everyone, I have 3 identical servers, each with a 16-core/32-thread AMD EPYC, 256GB RAM, 2x 1TB NVMe SSD in ZFS RAID1 for the OS, 4x 3.2TB NVMe SSD as Ceph storage for the VM drives, and 2x 4TB HDD in RAID0 for fast local backup.
These three servers are clustered together and connected with a dedicated 10Gbps LAN for Ceph plus 10Gbps to the internal LAN.
Currently 12 virtual machines need to run on them, which I have spread out 4 per server. The machines were exported from an ESXi 6.7 node onto an 8TB internal disk using VMware's ovftool, and then imported into the Ceph filesystem (using qm importovf).
I have already successfully migrated 11 VMs; only the last one is left, because it has a 1.7TB VMDK disk. Searching the internet, I found that CephFS has a default file size limit of 1TB. I also found the command with which the parameter can be changed:
fs set <fs name> max_file_size <size in bytes>
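
(For reference, the size must be given in bytes; 2 TiB, for example, works out to 2199023255552, which can be computed in the shell:)

Code:
~# echo $((2 * 1024**4))
2199023255552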

I tried changing the value to 2TB but the result is this:
Code:
root@srv03:~# fs set cephfs max_file_size 2199023255552
-bash: fs: command not found

I have searched the internet and the Proxmox wiki documentation, but I cannot find this setting anywhere in the pveceph command.
How can I increase this setting so that I can then move the VM disk?

Thank you in advance for your reply.
Kind Regards
Mattia
 
Try ceph fs ... instead of just "fs".
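
Something like this, I believe (replace the filesystem name and the size with yours):

Code:
~# ceph fs set cephfs max_file_size 2199023255552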

You don't seem to have much experience, so please... be careful.

Have fun!
I have already tried that, but the subcommand is not recognised.
pveceph seems to accept the fs subcommand only with the "create" and "destroy" arguments.
As I already wrote, I have not found anything in the docs either.
 
Okay...

My test cluster has Ceph, but "ceph fs" is not a valid command.

Code:
~# ceph fs
no valid command found;

Should have tested this earlier...

This is for:
Code:
~# pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-6-pve)

Actually, I am not a "real" Ceph user. I am not sure whether that "fs" command should be available on Proxmox VE. Sorry...
 
I'm still using the same version of PVE as you: pve-manager 8.0.4 (running version: 8.0.4/d258a813cfa6b390).

I have tested with the root user from the shell; should I try with another user?
Code:
~# su ceph
This account is currently not available.

Any other ideas? Anyone else who might be able to help?

I found this on the same/similar problem: https://forum.proxmox.com/threads/backup-of-vm-fails-broken-pipe.120320/post-523034

The solution there is the same command, but how does it apply? Under what conditions?
 
Before you do anything, check that you actually have an issue:

# ceph fs get [fsname] | grep max_file_size

then to change it,

# ceph fs set [fsname] max_file_size [size_in_bytes]

If the command fails, try issuing it from a node housing your MDS; you could be missing packages on the node you are issuing it from.
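
If you are not sure which node that is, the standard Ceph status commands should show where the active MDS is running (exact output varies by version):

# ceph fs status
# ceph mds stat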
 
Thanks Alexskysilk, the command you suggested worked correctly!
I was able to increase the maximum file size of the volume.

Code:
root@srv01:~# ceph fs set cephfs max_file_size 5497558138880
root@srv01:~# ceph fs get cephfs | grep max_file_size
max_file_size   5497558138880

I am now moving the virtual disk (1.7TB); I don't expect any problems.
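
(For anyone landing here later: the move can be done from the VM's Hardware panel in the GUI or from the CLI with something like the following, where the VM ID and disk name are just examples:)

Code:
~# qm disk move 112 scsi0 <target-storage>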
Thank you very much.
Kind regards
Mattia
 
Yes, I tried moving the disk file and it didn't give any problems. However, the storage is currently configured as a "directory" on the original Ceph_data, so when we activate HA it won't work. This evening I will create an RBD pool and try to move the disks of all the VMs to it, to have full HA enabled.
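
(Roughly the plan; the pool name here is just an example:)

Code:
~# pveceph pool create vm_pool --add_storages 1
~# qm disk move <vmid> <disk> vm_pool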
 
I did some testing this evening and, as I remembered from studying the documentation, even with the directory storage and the "shared" flag active, HA works without any problems.
Unfortunately, by activating a new pool now, the TBs used by the VMs that are already on the main pool are lost (7.29TB instead of 11.10TB).
If there is nothing wrong with using it this way, I would say it is perfect and working; I attach a screenshot of the storage panel.

What would be the difference between using the RBD (PVE) storage and a directory directly on the filesystem?

Screenshot 2023-08-14 alle 23.59.47.png
 
What would be the difference between using the RBD (PVE) storage and a directory directly on the filesystem?
RBD is designed to be used as virtual block storage. It means your guests get direct access to the underlying storage, and you get tight integration with native compression, snapshots, etc.

When you use CephFS as a store for qcow2 files, you get massive write amplification due to the CoW-on-CoW implementation. It also means that your snapshot mechanism sits on top of the original one, but you don't get any benefit: you end up with slower performance and more wasted space. There is no use case where this is preferable.
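
For comparison, the two setups would look roughly like this in /etc/pve/storage.cfg (the storage IDs, pool name and path here are just examples):

Code:
rbd: vm_rbd
	pool vm_pool
	content images
	krbd 0

dir: cephfs_dir
	path /mnt/pve/cephfs
	content images
	shared 1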

If you're doing your own system engineering, it would be worth your while to understand the technology; see https://docs.ceph.com/en/quincy/architecture/
 
