Search results

  1. Safe to port-forward the management interface

     Hello, sorry if this question has already been asked and answered; I tried to search but did not find anything. I just recently put a PMG into use, inside a DMZ network. Only using this for inbound, and so far it works really well. My users get the daily spam summary, including the links to either white-/ or...
  2. PVE 7.2, Ceph Quincy missing dependencies

     Hi all, Upgraded all nodes to PVE 7.2 and wanted to deploy a Ceph cluster. I set up the cluster from the UI with version "17.2.0 Quincy". Installed managers and monitors on all 3 nodes; now adding an OSD returns "binary not installed: /usr/sbin/ceph-volume". I noticed that the repository on the...
  3. Prune seems to not work for entire Datastore

     Hi all, I love PBS, works so well. The only thing I am dealing with now is that the Prune option for Datastores does not seem to work. I have set a prune setting to keep the last 7 backups. However, when I go to the datastore I see VMs with 10 copies (backup runs daily). Prune + GC job is set...
  4. Ubuntu 20.04 - Cloud-Init only runs with full clone, not linked clone

     Been working on this for over a day now. I am getting started with linked clones and cloud-init, and I am having an issue with the first boot of linked clones. Cloud-init simply doesn't run at all if the VM is a linked clone. After the clone, I make sure to change the IP and such, then press...
  5. Proxmox Ceph - 10K SAS vs Entry-level SSD

     Hi all, I am currently using Ceph with the following configuration: - 2x DL360 Gen9, 1x DL380 Gen9 - 1x Xeon E5-2690 v3 - 128 GB DDR4 ECC (2133 MHz) - 5x Intel S4510 SSD as OSDs - 4x 10 Gbps uplink with LACP. So, 3 nodes in total right now with a total of 15 OSDs. I have been handed an "old"...
  6. Ceph goes down after adding second boot disk for mirror

     Hi all, I have a 3-node Ceph cluster consisting of HPE Gen9 servers. It's been running well since I set it up, and I really enjoy the "no single point of failure" feature. Now, during the installation I was using some S3700 100 GB drives for boot ZFS mirrors; however, for one of the hosts, I...
  7. 3 node ceph concept consideration

     Hi all, I am currently considering a 3-node Ceph concept containing 3 of the following hosts: - Xeon E5-2690 v3 - 128 GB 2133 MHz DDR4 ECC memory - 1.6 TB P3600 NVMe (Ceph storage) - 4x 10 GbE networking - optionally 6x 2.5" SSDs (1 for boot drive). I can expand the memory to 256 GB and add...
  8. [SOLVED] Corosync wrong bindnetaddr

     Hi all, I have a Proxmox cluster consisting of 2 hosts (pmx01, pmx02). The cluster name is cluster01, and it previously included 2 other hosts which are now gone (pve01, pve02). Now, so far most things seem to be working right (except for some qlge driver port flapping), but I have noticed some recent...
  9. Server 2016 Essentials Installation fails on QEMU

     Hi all, I have recently set up a PVE 5.2-1 cluster of 2 hosts. I use a central NFS share to host the VM files. I wanted to install a new Server 2016 Essentials VM to replace our aging 2012 VM. I am using the 1705 version ISO of 2016 to install Essentials. For the storage controller I have selected...
  10. Ceph hardware question

     Hi all, I am looking to set up 3 Proxmox hosts and start hosting cloud services. I have a Dell C6220 box with 3 nodes, each with 2x Xeon E5-2680, 256 GB quad-rank 800 MHz memory, and a dual-port 10 GbE NIC. In each node I have space for 4x 3.5" drives and a single PCIe 3.0 low-profile slot. Now, which drives...