Search results

  1. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    I am currently looking into the NFS Ganesha plus keepalived active/passive path with two VMs. Adding additional cephx client authorizations to the Proxmox VE Ceph storage does not void the enterprise support, right?
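
    For what it is worth, such an additional cephx authorization is just an extra keyring created on one of the PVE nodes; a minimal sketch with a hypothetical client name, placeholder pool name and illustrative caps (not a recommendation):

        # hypothetical client name and caps for the two Ganesha VMs
        ceph auth get-or-create client.nfs-ganesha \
            mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data' \
            -o /etc/ceph/ceph.client.nfs-ganesha.keyring
        # confirm the stored caps
        ceph auth get client.nfs-ganesha
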
  2. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    @alexskysilk, by using an NFS-based read-only image I just create a DHCP entry and boot directly over the network from the NFS server. Maybe I did not explain clearly enough how our current setup works. https://ltsp.org/ is a project that uses this concept for clients. We use such a setup...
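
    A sketch of what such a DHCP entry can look like in ISC dhcpd syntax; the host name, MAC, addresses and export path below are made up:

        # hypothetical diskless client booting via PXE with an NFS root
        host vm-web01 {
            hardware ethernet 52:54:00:12:34:56;
            fixed-address 10.0.0.51;
            next-server 10.0.0.10;                            # TFTP/PXE server
            filename "pxelinux.0";                            # PXE bootloader
            option root-path "10.0.0.10:/exports/debian-ro";  # read-only NFS root
        }
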
  3. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi @Alwin, we are currently still running all of our Debian Linux VMs as PXE-booted diskless NFS-root machines. We have all applications installed (in a disabled state) into one image, create a snapshot and assign that read-only snapshot to the VMs via DHCP. Based on the hostname a config file...
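
    The boot entry for such a diskless NFS-root VM usually just points the kernel at the read-only export; a pxelinux sketch with placeholder paths and addresses:

        # illustrative pxelinux.cfg entry for a read-only NFS root
        LABEL debian-nfsroot
            KERNEL vmlinuz
            APPEND initrd=initrd.img ip=dhcp root=/dev/nfs nfsroot=10.0.0.10:/exports/debian-ro,ro
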
  4. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi @samontetro, so what happens if you want to reboot the CentOS 7 VM?
    - Do your NFS clients stall during that time?
    - Do your NFS clients just reconnect?
    From my point of view you have a single point of failure with that single VM. Thanks for your message though. Rainer
  5. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Hi, we are migrating from a VMware ESXi setup with NetApp NFS-based shared storage. We also used NFS filesystems for mounts like /home or /root, application filesystems like a shared /var/www within our virtual machines, and host-specific filesystems like /var/log. Most of our...
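
    For illustration, NFS mounts of that kind inside a VM boil down to /etc/fstab lines like these (server and export names are invented):

        # shared application filesystem and per-host log filesystem (illustrative)
        netapp01:/vol/www       /var/www   nfs  defaults,vers=3   0 0
        netapp01:/vol/log/web1  /var/log   nfs  defaults,vers=3   0 0
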
  6. [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    I just edited the config again in the web UI and saved it, so it might have looked different before:
    root@pbs02:~# cat /etc/proxmox-backup/datastore.cfg
    datastore: pve-infra-onsite
        comment PVE Cluster infra - On-Site backup for quick restores only
        gc-schedule 00:00...
  7. [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    This is strange...
    Proxmox Backup Server 1.0-5 ()
    2020-12-10T00:00:00+01:00: Starting datastore prune on store "pve-infra-onsite"
    2020-12-10T00:00:00+01:00: task triggered by schedule 'daily'
    2020-12-10T00:00:00+01:00: retention options: --keep-last 55 --keep-hourly 48 --keep-daily 7...
  8. [SOLVED] Pruning seems not to prune: Keep last = 3, but more than 11 snapshots are kept

    Hi, this is my setup for the Prune & GC job: But even when running it manually it does not remove older snapshots... Could this be a permission problem? Thanks, Rainer
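
    One way to see what the retention settings would actually do, without deleting anything, is a dry run from the client side; a sketch with placeholder repository and group names:

        # show which snapshots would be kept/removed without deleting anything
        proxmox-backup-client prune vm/100 \
            --repository root@pam@pbs02:pve-infra-onsite \
            --keep-last 3 --dry-run
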
  9. sync group vm failed: error trying to connect: error connecting - tcp connect error: deadline has elapsed

    Ok, so it would have been clever to just let the tcpdump run and wait for the first tcp connect error... ()
    2020-12-10T22:00:00+01:00: Starting datastore sync job 'pbs02:pve-infra-onsite:pve-infra:s-920ce45c-1279'
    2020-12-10T22:00:00+01:00: task triggered by schedule 'mon..fri 22:00'...
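
    A capture limited to the remote PBS keeps the dump small while waiting for the error; a sketch assuming the default PBS API port 8007, with an illustrative peer address:

        # capture only traffic to the remote PBS so the dump stays small
        tcpdump -i any -nn -w /tmp/pbs-sync.pcap host 10.33.40.12 and tcp port 8007
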
  10. sync group vm failed: error trying to connect: error connecting - tcp connect error: deadline has elapsed

    The IPsec tunnel is always up and both firewalls are well below their maximum session limit. I have seen the 10-second wait time between the error and the next action. Is there something like a retry setting I could use? I could run a ping/date job during the sync job window to rule out network problems.
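
    Such a ping/date check during the sync window could be as small as this sketch (target address and interval are arbitrary):

        # one timestamped round-trip time (or loss message) per second
        while true; do
            echo "$(date -Is) $(ping -c1 -W2 10.33.40.12 | grep -E 'time=|100% packet loss')"
            sleep 1
        done >> /var/log/pbs-sync-ping.log
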
  11. sync group vm failed: error trying to connect: error connecting - tcp connect error: deadline has elapsed

    pbs01 -> 10G -> Firewall (IPsec Tunnel) -> 1G -> Internet -> 1G -> Firewall (IPsec Tunnel End) -> 40G -> pbs02
    root@pbs01:~# ping -M do -s 1472 10.33.40.12
    PING 10.33.40.12 (10.33.40.12) 1472(1500) bytes of data.
    1480 bytes from 10.33.40.12: icmp_seq=1 ttl=62 time=8.37 ms
    1480 bytes from...
  12. sync group vm failed: error trying to connect: error connecting - tcp connect error: deadline has elapsed

    Ok, had a look at Administration -> Tasks and the logs there. The error occurs with different VMs. We only have KVM/QEMU VMs, no CTs.
  13. sync group vm failed: error trying to connect: error connecting - tcp connect error: deadline has elapsed

    Hi, I am playing around with PBS v1.0-5 using a sync job against a remote second installation with the same version. Versions:
    proxmox-backup: 1.0-4 (running kernel: 5.4.78-1-pve)
    proxmox-backup-server: 1.0.5-1 (running version: 1.0.5)
    pve-kernel-5.4: 6.3-3
    pve-kernel-helper: 6.3-3...
  14. Feature Request: Sync transfer speed limitation based on timeslots

    We are currently migrating from VMware/NetApp with Veeam to Proxmox VE with Proxmox BS. This is our use case:
    - Site 1 has a 4U server with lots of spinning disks and ZFS as the PBS1 data sink, plus PVE cluster 1.
    - Site 2 has PVE clusters 2 and 3 with a PBS2 installation on a QEMU VM which holds...
  15. How to shrink ZFS within VM?

    So which filesystem do you recommend within the VM? How do we prevent bit rot - should that not be done at the top level, within the VM?
  16. How to shrink ZFS within VM?

    @wolfgang, so what would you recommend if you want to take snapshots within the VM on a regular basis, e.g. every 15 minutes, using ZFS? Is ZFS within the VM on top of CephFS OK?
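
    A minimal sketch of 15-minute snapshots inside the VM via cron, with a made-up dataset name and a date-based naming scheme (dedicated tools like zfs-auto-snapshot or sanoid also handle pruning of old snapshots):

        # /etc/cron.d/zfs-snap -- illustrative only, no automatic cleanup of old snapshots
        */15 * * * * root /usr/sbin/zfs snapshot tank/data@auto-$(date +\%Y\%m\%d-\%H\%M)
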
  17. Proxmox Backup Server 1.0 (stable)

    As a private home user I want a backup but am OK with minor bugs <- so no money, no enterprise repository... For the company I want a working, stable solution with manufacturer support: test systems with a Community subscription, production systems with Basic and up...
  18. Proxmox Backup Server 1.0 (stable)

    I like your release date and time... ...it's a nice birthday present for me...
  19. Windows 7/2008R2 driver support

    I used https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.141-1/ for the installation back then...
  20. 150mb/sec on a NVMe x3 ceph pool

    With a 3-node cluster, invest in 100 GBit dual-port network cards and use DAC cables to make point-to-point connections. Then use https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server to set it up.
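
    The routed (simple) variant from that wiki page boils down to /etc/network/interfaces entries along these lines on each node; interface names and the 10.15.15.x addresses are examples, and nodes 2 and 3 get the mirrored addresses and routes:

        # node 1, cabled directly to node 2 (ens19) and node 3 (ens20)
        auto ens19
        iface ens19 inet static
            address 10.15.15.50/24
            up   ip route add 10.15.15.51/32 dev ens19
            down ip route del 10.15.15.51/32

        auto ens20
        iface ens20 inet static
            address 10.15.15.50/24
            up   ip route add 10.15.15.52/32 dev ens20
            down ip route del 10.15.15.52/32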