I can access it from an LXC container inside Proxmox with the user user2_ldap,
but from my desktop (outside of Proxmox), mounted via fstab with the same user (user2_ldap), I can go in and out of the internal directories but I cannot access the files' contents.
The issue still exists; still looking for help/guidance.
I am creating the mount as the root user (sudo mount -a)
and trying to access it as another user (a local/DC user), not as cephx (that user doesn't really exist, it exists only to create the share).
I can see the folder and files are mounted...
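One guess that might be worth checking: the kernel CephFS client still enforces the normal POSIX owner/group/mode bits, so if user2_ldap's UID/GID on the desktop doesn't match the numeric owner of the files stored on CephFS, directory traversal can work while reading the file contents fails. The path and user below are just the ones from this thread:

ls -ln /mnt/ftd    # numeric UID/GID owning the files on the mount
id user2_ldap      # UID/GID the desktop user actually has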
I did not do anything. I just tried to edit it again and it works; I can disable Ceph from the backup option.
Thanks, but I still don't know what really fixed it.
IPs are hidden.
Cluster information
-------------------
Name: vq-pve
Config Version: 42
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Mon Jan 27 11:50:06 2020
Quorum provider: corosync_votequorum
Nodes: 10
Node ID...
I know this is an old question,
but I could not find a workaround other than removing the bind mounts, then cloning, and then adding them back.
Does anyone know a better solution?
Update: I have made some progress. I can mount it and see the files, but I don't have permission to read the files :(
ceph-mon1.storage:6789:/data /mnt/ftd ceph name=cephx,secretfile=/etc/ceph/ceph.client.cephx.keyring,ro,_netdev,noatime 0 0
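A note in case it helps anyone reading this later: as far as I understand, the secretfile= option expects a file containing only the bare key, not a full keyring. If /etc/ceph/ceph.client.cephx.keyring above is a regular keyring, it may be worth extracting just the key into its own file (the target path is only an example) and pointing secretfile= at that instead:

ceph auth get-key client.cephx > /etc/ceph/cephx.secret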
It is working only when setting just / as the path when creating the authorization;
I could not mount a sub-directory, only the root.
works:
mount -t ceph ceph-mon1.storage:6789,ceph-mon2.storage:6789,ceph-mon3.storage:6789:/ /mnt/mycephfs -o name=cephx,secretfile=/etc/ceph/ceph.client.cephx.keyring
does not work...
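In case it is useful: on recent Ceph releases the caps for a sub-directory mount can be generated with ceph fs authorize. A sketch along these lines (the filesystem name cephfs and the /data path are only assumptions based on this thread):

ceph fs authorize cephfs client.cephx /data r

With caps generated that way, a read-only mount of ...:/data with name=cephx should be possible; caps that were created by hand against path / only would fit the behaviour described above.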
I followed the guide above,
and when I mount I get:
mount error 13 = Permission denied
I executed the command:
mount -t ceph ceph-mon1.storage:6789:/ /mnt/mycephfs -o name=cephx,secretfile=/etc/ceph/ceph.client.cephx.keyring
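Not sure if this is the cause, but error 13 usually means the client's cephx caps don't allow what the mount is requesting. The caps for the client used in this thread can be checked with:

ceph auth get client.cephx

It may be worth confirming that mon, mds and osd caps are all present and that the mds caps allow at least read on the path being mounted.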
Yes, but also subtract approx 10% of the total (for journaling and such).
We moved to Ceph from FreeNAS (it was good, but as a single node it is limited in performance and capacity), and we mainly need it for READ access to feed enough data to our cluster, so Ceph was ideal. I am still figuring out...
If you are on replication 3, each piece of data is stored 3 times, plus some overhead.
For each 1 TB of RAW capacity you will get approx 300 GB of usable space.
Adding 10 HDDs of 2 TB each will give you approx 6 TB of usable space.
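The rough formula behind those numbers (the ~10% overhead is only the estimate used above, not an exact figure):

usable ≈ raw / replication_factor × (1 - overhead)
       ≈ (10 × 2 TB) / 3 × 0.9
       ≈ 6 TB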
I have successfully integrated Ceph (Proxmox based) in all the LXC containers;
now I want to integrate it outside of Proxmox for some users with read-only access, to replace the current NFS share.
What do I need to do? What params do I put in /etc/fstab?
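For reference, a minimal fstab sketch of the kind of entry that should work from a plain Linux client (monitor names, client name and paths are placeholders; the real ones used here appear elsewhere in this thread):

mon1:6789,mon2:6789,mon3:6789:/ /mnt/mycephfs ceph name=cephx,secretfile=/etc/ceph/cephx.secret,ro,_netdev,noatime 0 0

This also assumes ceph-common is installed on the client (it provides mount.ceph) and that a cephx user with read-only caps has been created.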