No idea why people are using NFS-Ganesha???
Created a fresh CT,
copied, adjusted and reloaded an apparmor profile for it:
root@proxmox07:~# cat /etc/apparmor.d/lxc/lxc-default-with-nfs2ceph
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles...
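A minimal sketch of what such a profile can contain (the profile name inside the file and the exact mount rules here are assumptions, extending the stock LXC NFS profile with a ceph mount rule):

profile lxc-container-default-with-nfs2ceph flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  # allow NFS and CephFS mounts inside the CT
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
  mount fstype=ceph,
}

Reloaded via the parent file as the header comment says, and referenced from the CT config (the raw lxc key and CTID here are just an example):

root@proxmox07:~# apparmor_parser -r /etc/apparmor.d/lxc-containers
root@proxmox07:~# grep apparmor /etc/pve/lxc/123.conf
lxc.apparmor.profile: lxc-container-default-with-nfs2ceph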
...and that was it with NFS-Ganesha:
https://github.com/nfs-ganesha/nfs-ganesha/blob/4e0b839f74608ce7005e533eda1431c730257662/src/FSAL/FSAL_CEPH/export.c#L307
* Currently, there is no interface for looking up a snapped
* inode, so we just bail here in that case.
*/...
Yes it does. But as soon as the cron.d job killed all the ganesha.nfsd processes on all five CTs, there is nowhere to move the IP to.
This is a part of my keepalived config:
rstumbaum@controlnode01.dc1:~$ cat keepalived/conf.d/check_proc_ganesha.conf
vrrp_script check_proc_ganesha {...
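A minimal sketch of such a vrrp_script plus tracking stanza (script path, timings, interface, VIP and priority are made-up examples, not the truncated config above):

vrrp_script check_proc_ganesha {
    script "/usr/bin/pgrep ganesha.nfsd"   # healthy while a ganesha.nfsd process exists
    interval 2
    fall 2
    rise 2
}

vrrp_instance nfs_vip_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.10.1.10/24
    }
    track_script {
        check_proc_ganesha
    }
}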
Excellent test! The nfs-ganesha systemd unit file is crap! After a pkill -9 it does not start automatically again, so I am going to lose the NFS exports as soon as I am through with the cycle!
Have to add Restart=always there...
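A drop-in override should be enough for that, assuming the Debian unit is called nfs-ganesha.service:

root@nfsshares-a:~# mkdir -p /etc/systemd/system/nfs-ganesha.service.d
root@nfsshares-a:~# cat /etc/systemd/system/nfs-ganesha.service.d/restart.conf
[Service]
Restart=always
RestartSec=5
root@nfsshares-a:~# systemctl daemon-reload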
Good idea! Trying that now!
Yes. The NFS servers each have 7 ethernet devices: admin access, Ceph Public Network, and 5 storage networks dedicated to NFS traffic to the VMs. Each VM has two network interfaces: storage access and application network. Storage access is an MTU 9000 non-routed network...
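Inside a VM the storage-access leg is nothing special, just a static interface with the large MTU; a sketch with example interface name and address:

auto ens19
iface ens19 inet static
    address 10.10.1.101/24
    mtu 9000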
I am building NFS-Ganesha now using a Docker container and the Debian build tools.
rstumbaum@controlnode01.dc1:~/docker-nfs-ganesha-build$ cat Dockerfile
ARG DEBIAN_RELEASE="buster"
ARG CEPH_RELEASE_PVE="nautilus"
FROM debian:${DEBIAN_RELEASE} AS build-env
ARG DEBIAN_RELEASE
ARG...
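The build step inside the container is essentially the standard Debian packaging flow; a sketch (where the nfs-ganesha packaging source gets unpacked from is an assumption):

apt-get update
apt-get install -y build-essential devscripts equivs
cd nfs-ganesha-*/                    # unpacked source tree containing a debian/ directory
mk-build-deps -ir debian/control     # install the declared build dependencies
dpkg-buildpackage -us -uc -b         # build unsigned binary .deb packages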
I have picked this up again; the post there could use a like:
https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/post-363314
So I am now following this path:
- On the 5 production nodes, install 5 minimal CTs with NFS-Ganesha on Debian
root@nfsshares-a:~# grep '^[[:blank:]]*[^[:blank:]#;]' /etc/ganesha/ganesha.conf
NFS_CORE_PARAM
{
Enable_NLM = false;
Enable_RQUOTA = false;
Protocols = 3,4...
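A hedged sketch of what a CephFS-backed export block further down in ganesha.conf can look like (export id, paths and the cephx user are examples, not the actual config above):

EXPORT
{
        Export_Id = 1;
        Path = /shares;                 # path inside CephFS
        Pseudo = /shares;               # NFSv4 pseudo filesystem path
        Access_Type = RW;
        Squash = No_Root_Squash;
        Transports = TCP;
        FSAL
        {
                Name = CEPH;
                User_Id = "nfsshares";  # cephx client.nfsshares, example name
        }
}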
I am currently looking into the NFS-Ganesha keepalived active/passive two-VM path. Adding additional cephx client authorizations on the Proxmox VE Ceph storage does not void the enterprise support, right?
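The extra client itself would be something like this (filesystem and client name are examples):

root@proxmox07:~# ceph fs authorize cephfs client.nfsshares / rw > /root/ceph.client.nfsshares.keyring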
@alexskysilk , by using an NFS-based read-only image I just create a DHCP entry and boot directly over the network from the NFS server.
Maybe I did not make myself properly clear on how our current setup works.
https://ltsp.org/ is a project where they use that concept for clients. We use such a setup...
Hi @Alwin,
we are currently still running all of our Debian Linux VMs as PXE-booted diskless NFS-Root machines. We have all applications installed (in a disabled state) into one image, create a snapshot, and assign that read-only snapshot to the VMs via DHCP. Based on the hostname a config file...
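The boot side is the usual kernel NFS-root mechanism; a sketch of a pxelinux entry with made-up server address and export path:

LABEL debian-nfsroot
    KERNEL vmlinuz
    INITRD initrd.img
    APPEND root=/dev/nfs nfsroot=10.10.1.2:/images/debian-ro,vers=3 ro ip=dhcp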
Hi @samontetro ,
so what happens if you want to reboot the CentOS7 VM?
- Do your NFS clients stall during that time?
- Do your NFS clients just reconnect?
From my point of view you have a Single Point of Failure with that single VM.
Thanks for your message though.
Rainer
Hi,
we are migrating from a VMware ESXi setup with NetApp NFS-based shared storage.
We also used NFS filesystems for mounts like /home or /root, application filesystems like a shared /var/www within our virtual machines, and host-specific filesystems like /var/log.
Most of our...
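On the VM side such mounts are plain /etc/fstab NFS entries; a sketch with example server name and export paths:

nfs01:/export/home        /home      nfs4   rw,hard,noatime   0 0
nfs01:/export/www         /var/www   nfs4   rw,hard,noatime   0 0
nfs01:/export/log/web01   /var/log   nfs4   rw,hard,noatime   0 0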
I just edited the config again in the WebUI and saved it.
So it might have looked different before:
root@pbs02:~# cat /etc/proxmox-backup/datastore.cfg
datastore: pve-infra-onsite
	comment PVE Cluster infra - On-Site backup for quick restores only
	gc-schedule 00:00...
This is strange...
Proxmox Backup Server 1.0-5
2020-12-10T00:00:00+01:00: Starting datastore prune on store "pve-infra-onsite"
2020-12-10T00:00:00+01:00: task triggered by schedule 'daily'
2020-12-10T00:00:00+01:00: retention options: --keep-last 55 --keep-hourly 48 --keep-daily 7...
Hi,
this is my setup for the Prune & GC job:
But even when running it manually it does not remove older snapshots...
Could this be a permission problem?
Thanks
Rainer