Search results

  1. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    No idea why people are using NFS-Ganesha? I created a fresh CT, then copied, adjusted and reloaded an apparmor profile for it: root@proxmox07:~# cat /etc/apparmor.d/lxc/lxc-default-with-nfs2ceph # Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which # will source all profiles...
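
    The profile referenced above is cut off; for reference, a minimal sketch of such an AppArmor profile, assuming it is based on the stock lxc container-base abstraction (the actual lxc-default-with-nfs2ceph from the post may differ):

      profile lxc-default-with-nfs2ceph flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>
        # let the CT mount NFS exports and the rpc_pipefs used by the NFS tooling
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
        # let the CT mount CephFS directly
        mount fstype=ceph,
      }

    The CT would then reference it via lxc.apparmor.profile: lxc-default-with-nfs2ceph in its config, and the profile set is reloaded with apparmor_parser -r /etc/apparmor.d/lxc-containers.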
  2. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    ...and that was it with NFS-Ganesha: https://github.com/nfs-ganesha/nfs-ganesha/blob/4e0b839f74608ce7005e533eda1431c730257662/src/FSAL/FSAL_CEPH/export.c#L307 * Currently, there is no interface for looking up a snapped * inode, so we just bail here in that case. */...
  3. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Yes it does. But as soon as the cron.d job has killed all the ganesha.nfsd processes on all five CTs, there is nowhere left to move the IP to. This is part of my keepalived config: rstumbaum@controlnode01.dc1:~$ cat keepalived/conf.d/check_proc_ganesha.conf vrrp_script check_proc_ganesha {...
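
    The vrrp_script block is truncated above; a minimal sketch of such a process check (interval and thresholds are illustrative, not the poster's actual values):

      # mark the node faulty when no ganesha.nfsd process is running
      vrrp_script check_proc_ganesha {
          script "/usr/bin/pgrep -x ganesha.nfsd"
          interval 2   # check every 2 seconds
          fall 2       # failures before FAULT
          rise 2       # successes before recovery
      }

    The script is then referenced from a track_script block inside the vrrp_instance, so the VIP moves away from a node whose ganesha.nfsd has died, which is why killing all instances at once leaves keepalived with nowhere to put the IP.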
  4. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Excellent test! The nfs-ganesha systemd unit file is crap! After a pkill -9 it does not start again automatically, so I am going to lose the NFS exports as soon as I am through with the cycle! I have to add Restart=always there...
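
    A systemd drop-in is one way to add that without touching the packaged unit; a sketch, assuming the Debian service is called nfs-ganesha.service:

      # /etc/systemd/system/nfs-ganesha.service.d/restart.conf
      [Service]
      Restart=always
      RestartSec=5

    followed by systemctl daemon-reload && systemctl restart nfs-ganesha.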
  5. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Good idea! Trying that now! Yes, the NFS servers each have 7 ethernet devices: admin access, Ceph public network, and 5 storage networks dedicated to NFS traffic to the VMs. Each VM has two network interfaces: storage access and application network. Storage access is an MTU 9000 non-routed network...
  6. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    From the NFS client it is barely noticeable. I currently run a cron.d reboot script like this: 1-59/5 * * * * root hostname | grep -qE 'nfsshares-a' && /bin/systemctl reboot 2-59/5 * * * * root hostname | grep -qE 'nfsshares-b' && /bin/systemctl reboot 3-59/5 * * * * root hostname | grep -qE...
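
    The cron.d file is truncated above; the remaining lines presumably stagger the other CTs the same way. A hypothetical reconstruction for five CTs (only nfsshares-a and nfsshares-b appear verbatim in the thread, the rest is illustrative):

      # /etc/cron.d/nfs-reboot-test: reboot each NFS CT every 5 minutes, offset by one minute
      1-59/5 * * * * root hostname | grep -qE 'nfsshares-a' && /bin/systemctl reboot
      2-59/5 * * * * root hostname | grep -qE 'nfsshares-b' && /bin/systemctl reboot
      3-59/5 * * * * root hostname | grep -qE 'nfsshares-c' && /bin/systemctl reboot
      4-59/5 * * * * root hostname | grep -qE 'nfsshares-d' && /bin/systemctl reboot
      5-59/5 * * * * root hostname | grep -qE 'nfsshares-e' && /bin/systemctl reboot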
  7. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    I am building NFS-Ganesha now using a Docker container and the Debian build tools. rstumbaum@controlnode01.dc1:~/docker-nfs-ganesha-build$ cat Dockerfile ARG DEBIAN_RELEASE="buster" ARG CEPH_RELEASE_PVE="nautilus" FROM debian:${DEBIAN_RELEASE} AS build-env ARG DEBIAN_RELEASE ARG...
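
    The Dockerfile is cut off after the ARG/FROM lines; a minimal sketch of how such a build stage might continue, assuming the Ceph libraries come from the Proxmox repository so FSAL_CEPH links against the same librados/libcephfs release as the cluster:

      ARG DEBIAN_RELEASE="buster"
      ARG CEPH_RELEASE_PVE="nautilus"
      FROM debian:${DEBIAN_RELEASE} AS build-env
      ARG DEBIAN_RELEASE
      ARG CEPH_RELEASE_PVE
      # Debian packaging toolchain
      RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential devscripts debhelper equivs cmake git wget ca-certificates
      # Proxmox Ceph repository (key URL and package set are assumptions, not from the post)
      RUN wget -qO /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg \
            http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
       && echo "deb http://download.proxmox.com/debian/ceph-${CEPH_RELEASE_PVE} ${DEBIAN_RELEASE} main" \
            > /etc/apt/sources.list.d/ceph.list \
       && apt-get update && apt-get install -y --no-install-recommends libcephfs-dev librados-dev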
  8. R

    CEPHS NFS-Ganesha

    I have picked this up again here; that post could use a like: https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/post-363314
  9. R

    Proxmox VE 6.1 + in LXC: Ubuntu 18.04 + nfs-server: rpc-gssd.service: Job rpc-gssd.service/start failed with result 'dependency'.

    You might want to vote this up: https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/post-363314
  10. R

    nfs-kernel-server on lxc: yes or no?

    You might want to vote this up: https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/post-363314
  11. R

    nfs error in lxc

    You might want to vote this up: https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/post-363314
  12. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    So I am going down this path now: - On the 5 production nodes, install 5 minimal CTs with NFS-Ganesha on Debian root@nfsshares-a:~# grep '^[[:blank:]]*[^[:blank:]#;]' /etc/ganesha/ganesha.conf NFS_CORE_PARAM { Enable_NLM = false; Enable_RQUOTA = false; Protocols = 3,4...
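
    The grep only shows the first directives; a minimal sketch of a ganesha.conf exporting CephFS this way (Export_Id, paths and the cephx user below are illustrative, not the poster's values):

      NFS_CORE_PARAM {
          Enable_NLM = false;
          Enable_RQUOTA = false;
          Protocols = 3, 4;
      }
      EXPORT {
          Export_Id = 100;
          Path = "/";                # path inside CephFS
          Pseudo = "/shares";        # path the NFS clients see and mount
          Access_Type = RW;
          Squash = No_Root_Squash;
          FSAL {
              Name = CEPH;           # FSAL_CEPH backend via libcephfs
              User_Id = "nfsshares"; # cephx identity client.nfsshares
          }
      }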
  13. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    I am currently looking into the NFS Ganesha keepalived active/passive two VMs path. Adding additional cephx client authorizations on the Proxmox VE Ceph storage does not void the enterprise support, right?
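
    For reference, an additional CephFS client capability is created with the standard Ceph tooling; a sketch with hypothetical client and filesystem names:

      # grant a dedicated cephx identity read/write access to the CephFS root
      ceph fs authorize cephfs client.nfsganesha / rw
      # export its keyring so the Ganesha VMs can use it
      ceph auth get client.nfsganesha -o /etc/ceph/ceph.client.nfsganesha.keyring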
  14. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    @alexskysilk, by using an NFS-based readonly image I just create a DHCP entry and boot directly over the network from the NFS server. Maybe I did not make myself properly clear about how our current setup works. https://ltsp.org/ is a project where they use that concept for clients. We use such a setup...
  15. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi @Alwin, we are currently still running all of our Debian Linux VMs as PXE-booted diskless NFS-root machines. We install all applications (in a disabled state) into one image, create a snapshot, and assign that readonly snapshot to the VMs using DHCP. Based on the hostname, a config file...
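
    A sketch of how such a readonly NFS root is typically handed to a host over DHCP (ISC dhcpd syntax; host name, MAC and export path are illustrative):

      host web01 {
          hardware ethernet 52:54:00:aa:bb:cc;
          filename "pxelinux.0";                                  # PXE bootloader
          option root-path "10.0.0.10:/exports/images/debian-ro"; # readonly NFS root snapshot
      }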
  16. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi @samontetro, so what happens if you want to reboot the CentOS 7 VM? - Do your NFS clients stall during that time? - Do your NFS clients just reconnect? From my point of view you have a single point of failure with that single VM. Thanks for your message though. Rainer
  17. R

    HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi, we are migrating from a VMware ESXi setup with NetApp NFS-based shared storage. We also used NFS filesystems for mounts like /home or /root, application filesystems like a shared /var/www within our virtual machines, and host-specific filesystems like /var/log. Most of our...
  18. R

    [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    I just edited the config again in the WebUI and saved it, so it might have looked different before: root@pbs02:~# cat /etc/proxmox-backup/datastore.cfg datastore: pve-infra-onsite comment PVE Cluster infra - On-Site backup for quick restores only gc-schedule 00:00...
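
    For comparison, a sketch of what a datastore.cfg entry with retention settings looks like on PBS 1.x (values are illustrative, not the poster's actual configuration):

      datastore: pve-infra-onsite
              comment PVE Cluster infra - On-Site backup for quick restores only
              gc-schedule 00:00
              keep-last 3
              path /mnt/datastore/pve-infra-onsite
              prune-schedule daily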
  19. R

    [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    This is strange... Proxmox Backup Server 1.0-5 () 2020-12-10T00:00:00+01:00: Starting datastore prune on store "pve-infra-onsite" 2020-12-10T00:00:00+01:00: task triggered by schedule 'daily' 2020-12-10T00:00:00+01:00: retention options: --keep-last 55 --keep-hourly 48 --keep-daily 7...
  20. R

    [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    Hi, this is my setup for the Prune & GC job: But even when I run it manually it does not remove older snapshots... Could this be a permission problem? Thanks, Rainer
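
    One way to see whether retention settings or permissions are the problem is a dry-run prune from the CLI; a sketch with a hypothetical backup group and repository:

      # show what would be pruned without deleting anything (group and repository are examples)
      proxmox-backup-client prune vm/100 --keep-last 3 --dry-run \
          --repository root@pam@localhost:pve-infra-onsite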
