[SOLVED] Add Storage - what does the content type do?

scyto

Active Member
Aug 8, 2023
When one adds storage, the content types can be set (containers, templates, snippets, etc.).

What does this actually do?
Does this fundamentally change the way the storage is handled (caching, access, etc.) or is it just about where it appears in other functions in the Proxmox UI?
 
No, it doesn't change anything about the storage itself; it just defines what you are allowed to use it for.
For example, if there is a location that is suitable for backups and you never want to use it for anything else, then it does not make sense to mark it for anything but backups.
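For anyone wondering what that looks like in practice (the storage name and path below are just examples), it is only the content line in /etc/pve/storage.cfg, or the equivalent pvesm call:

# /etc/pve/storage.cfg - a directory storage restricted to backups
dir: backup-nas
        path /mnt/backup-nas
        content backup

# or, for an already-defined storage:
pvesm set backup-nas --content backup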



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 thanks!

I wish we could set a null content type then, lol. This is for a CephFS I will be passing through to a VM with virtiofsd :) so none of the content types apply.
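(For anyone finding this later: once virtiofsd exposes the host's CephFS mount to the guest, the guest side is just a plain virtiofs mount against whatever share tag was configured - the tag and mount point below are placeholders.)

# inside the VM
mount -t virtiofs cephfs-share /mnt/cephfs

# or persistently via /etc/fstab
cephfs-share  /mnt/cephfs  virtiofs  defaults  0  0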

thanks for unblocking me!
 
you don't actually have to add cephfs into pve if you don't intend to use it for pve... but in the context of your use case, why bother using cephfs at all? just create an rbd device and map it directly to your vm.
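Roughly like this, if you go that way (the storage name, VMID and size are just examples, and it assumes an RBD storage is already defined in PVE):

# allocate an RBD-backed volume and attach it to the VM
pvesm alloc ceph-rbd 101 vm-101-disk-1 32G
qm set 101 --scsi1 ceph-rbd:vm-101-disk-1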
 
you don't actually have to add cephfs into pve if you don't intend to use it for pve... but in the context of your use case, why bother using cephfs at all? just create an rbd device and map it directly to your vm.
can 3 VMs access the RBD at the same time?
I am pretty sure that's a very bad idea....

the scenario here is:
  1. a replicated file system
  2. that is surfaced up into 3 docker VMs (one on each node)
  3. docker is configured so that each stack stores persistent data within the CephFS
  4. if one docker VM fails, the container in the VM will start on one of the other swarm nodes
  5. (I handle write conflict issues by ensuring the swarm stack only allows one instance of the container to run on any given docker node VM)
this is to replace the GlusterFS I have been using with the VMs for the last couple of years;
it has worked well other than cranky GlusterFS volume driver plugins disabling themselves at the first sign of network trouble
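(the single-instance constraint from point 5 is just normal swarm service settings - the service name, image and path below are placeholders, and this assumes the CephFS-backed path is already mounted at /mnt/cephfs inside each docker VM)

docker service create \
  --name whoami \
  --replicas 1 \
  --mount type=bind,source=/mnt/cephfs/appdata/whoami,target=/data \
  traefik/whoami
# or use --replicas-max-per-node 1 instead if a stack needs more than one replica overall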

the docker swarm cluster setup (all running on VMs on proxmox)
the proxmox cluster

I would have the swarm use CephFS directly, but all the CephFS volume driver plugins are old and unmaintained,
and drallas was good enough to document this approach, which I want to try

I may yet revert to using one of the CephFS volume drivers
 
can 3 VMs access the RBD at the same time?
ah I see what you're trying to do. in this case, the method is still going to be different than what you're thinking:

1. set up separate public and private interfaces for ceph, and attach the public interface to a bridge.
2. connect each of your VMs to the ceph public bridge.
3. install ceph-common on each, and set up keys as appropriate.
4. mount cephfs directly on each guest. if this is explicitly for docker, you may not need to mount it at all and can just use a cephfs docker plugin (https://github.com/Brindster/docker-plugin-cephfs)
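A rough sketch of 3 and 4 on a guest (the monitor address, client name and secret path are placeholders):

apt install ceph-common
# copy /etc/ceph/ceph.conf and the client keyring/secret over from a ceph node first
mount -t ceph 10.10.10.11:6789:/ /mnt/cephfs -o name=dockerfs,secretfile=/etc/ceph/dockerfs.secret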
 
4. mount cephfs directly on each guest. if this is explicitly for docker, you may not need to mount it at all and can just use a cephfs docker plugin (https://github.com/Brindster/docker-plugin-cephfs)
see my previous note about the unmaintained status of the three common CephFS volume drivers.

I am *fully* aware of the multiple approaches - I am evaluating and playing, and just needed someone to answer my question and not be this guy: "You Don't Wanna Do It Like That!" - Harry Enfield & Chums - YouTube ;-)

Also, why bother connecting over the virtualized network if passing it up via virtiofsd can work at memory speed...

part of this will be seeing which is faster.... and before you reply, yes, I know the way network access works on hypervisors and it should also be at memory speed...

and this is all about playing and evaluating
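For the "which is faster" part, a single fio run from inside a docker VM against each mount should be enough to compare - the path and sizes below are arbitrary:

fio --name=cephfs-test --directory=/mnt/cephfs/fio --rw=randwrite --bs=4k --size=1G --iodepth=16 --ioengine=libaio --direct=1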
 
in my current network setup the docker host VMs are connected to vmbr0, and this limits network bandwidth to 2.5GbE, so installing the FUSE client on the docker nodes will likely be slower than using virtiofsd.... but maybe that doesn't matter, as the CephFS IOPS load will be low for this purpose - nice academic thought exercise tho....

I need to put some thought into what network topology modifications would be needed to allow the docker nodes to use the loopback on the PVE host instead....
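One half-formed idea (untested, and the names below are just examples): a port-less bridge on each PVE node so the docker VM talks to the local host at memory speed - though the VM would still need a route from there to the rest of the ceph public network, which is the part that needs the actual thought:

# /etc/network/interfaces on the PVE host - a bridge with no physical ports
auto vmbr9
iface vmbr9 inet static
        address 10.99.99.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

# give the docker VM a second NIC on it (VMID is an example)
qm set 101 --net1 virtio,bridge=vmbr9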
 
