Search results

  1. weehooey-bh

    Setting up multiple networks between PVE

    Please post the contents of your /etc/network/interfaces file. This will provide a clear picture of what you have right now.
  2. weehooey-bh

    Setting up multiple networks between PVE

    To confirm, you only have one physical network interface on the server? If you can, getting at least one additional network port would make your life easier. Do you have a managed switch that supports VLANs? Do you have administrative access to that switch? If so, the best solution would be to...
  3. weehooey-bh

    Backup server in proxmox VM

    Ah. Okay. Then, put PVE on the G9 and make PBS a VM. You will likely outgrow that setup, but it will get you started. I would still look at ZFS on the G9. It will give you the flexibility to grow or change later.
  4. weehooey-bh

    extremely low download speed

    I suspect you are having more issues than just speed, and I am surprised that the configuration works. IMPORTANT: I have not tested the configuration below and only know what you have posted. Make sure you understand it and its implications. If you use it, be sure to be able to roll back if...
  5. weehooey-bh

    Backup server in proxmox VM

    Yes. From the information you have provided, you will have one of two scenarios: #1 Light VM load If your cluster has a light load (e.g., a few VMs, light compute/storage resource usage), you have lots of room for your DNS server, Windows, etc. The DNS, in particular, would be good on a...
  6. weehooey-bh

    Backup server in proxmox VM

    With those resources, if you are planning to use most of them for VMs and LXCs, you will need more storage on the PBS. Without knowing more, I would plan to dedicate the G9 to PBS and get more storage. With (4 x 1.92 TB) x 3 in Ceph, you will have roughly 7.68 TB of usable storage, and at 50% you will have...
  7. weehooey-bh

    Pfsense VM WAN Interface Setup on Proxmox HA Cluster

    If configured correctly, hosting pfSense in PVE can be safe. Since your PVE hosts do not have an IP address on vmbr1, they will not have access to it, and therefore traffic on vmbr1 will not have access to the PVE hosts. And since no VMs are on vmbr1, they are not accessible either.
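
    The isolation described in that post depends on the bridge having no host IP address. A minimal sketch of such a bridge stanza in /etc/network/interfaces (the physical port name enp2s0 is an assumption, not from the thread):

    ```text
    # vmbr1: guest-only bridge; "inet manual" means the PVE host
    # itself has no IP on it, so the host is unreachable from vmbr1.
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
    ```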
  8. weehooey-bh

    extremely low download speed

    You have not provided enough information for someone to help you. What have you tried? What is the configuration of your MikroTik VM? Please share the contents of your /etc/network/interfaces file.
  9. weehooey-bh

    Backup server in proxmox VM

    You did not mention the VM storage being backed up or any details about workloads on your production cluster, so determining the resources needed for the Proxmox Backup Server (PBS) would be a guess. You can run PBS in PVE as a VM, which seems like what you are thinking about doing. If you do...
  10. weehooey-bh

    Proxmox ACME DNS not working in 8.2.2

    Thanks for trying this (love the change text :) ). We now know this is where it is failing. Looking at the command, it is looking for "BEGIN PRIVATE KEY" and you tested adding that to the contents of the file. This suggests that the private key file is not making it here or is being...
  11. weehooey-bh

    Proxmox ACME DNS not working in 8.2.2

    You are welcome for the help. Yes, your encoding looks as I would expect. It may be using a different encoding, although I may be wrong about where the error is coming from. Please only take this next step if you are comfortable editing code. You will be messing with the guts of PVE. Also, if...
  12. weehooey-bh

    Proxmox ACME DNS not working in 8.2.2

    I have dug through the source code of both Proxmox VE and acme.sh. The error you are getting is from this file: /usr/share/proxmox-acme/dnsapi/dns_transip.sh In particular, this part: if [ -f "$TRANSIP_Key_File" ]; then if ! grep "BEGIN PRIVATE KEY" "$TRANSIP_Key_File" >/dev/null 2>&1...
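
    The truncated snippet above boils down to a case-sensitive grep for the PEM header. A standalone sketch of that check (the function name and messages are mine, not from dns_transip.sh):

    ```shell
    #!/bin/sh
    # Sketch of the key-file check quoted from dns_transip.sh: the file
    # must exist and contain the literal, case-sensitive string
    # "BEGIN PRIVATE KEY" somewhere in its contents.
    check_transip_key() {
      TRANSIP_Key_File="$1"
      if [ -f "$TRANSIP_Key_File" ]; then
        if ! grep "BEGIN PRIVATE KEY" "$TRANSIP_Key_File" >/dev/null 2>&1; then
          echo "invalid key file"
          return 1
        fi
        echo "key file ok"
        return 0
      fi
      echo "key file missing"
      return 1
    }
    ```

    Note that a key in the traditional RSA format starts with "BEGIN RSA PRIVATE KEY", which does not contain the substring "BEGIN PRIVATE KEY", so such a file would fail this check even though it is a valid key.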
  13. weehooey-bh

    Proxmox ACME DNS not working in 8.2.2

    Does your key file start with the text "BEGIN PRIVATE KEY"? If it does not, add it and try again. Note: This is case-sensitive.
  14. weehooey-bh

    pbs pull

    Yes, the remote PBS needs to access port 8007. It would be a good idea to only allow traffic to port 8007 from the IP address where you have the remote PBS. You cannot limit what API functions are available via port 8007, but you can limit what the API token can do. I usually create a user...
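
    A minimal sketch of the port restriction described there, using plain iptables (203.0.113.10 stands in for the remote PBS address; the built-in Proxmox firewall can express the same rule):

    ```shell
    # Allow the remote PBS to reach the API/UI port 8007, drop everyone else.
    iptables -A INPUT -p tcp --dport 8007 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8007 -j DROP
    ```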
  15. weehooey-bh

    backup data of my VM NFS

    Yes, you can install PBS on a PVE host (i.e., side by side). Or, you can run PBS as a VM in PVE. Proxmox Backup Server - Installation
  16. weehooey-bh

    backup data of my VM NFS

    This should be a new post. Your original post was about network connectivity. This is about backups. No, PVE backups are for the VMs and LXCs. You might want to run a Proxmox Backup Server as a guest on Host 2. Alternatively, use something like rclone as a cronjob to sync the data.
  17. weehooey-bh

    backup data of my VM NFS

    Is this a layer 3 switch? Getting from 192.168.1.0/24 to 192.168.20.0/24 needs a router. There are a lot of options for doing backups, depending on what you want to achieve. There is a backup built into PVE. You can back up the whole VM.
  18. weehooey-bh

    backup data of my VM NFS

    Sharing data between VMs does not require adding it as storage in PVE. If the VMs are on the same VLAN, you can share it directly, as if they were physical servers on the same subnet.
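
    As an illustration of sharing directly between guests (all addresses and paths are hypothetical), an NFS export on one VM and a mount from another might look like:

    ```shell
    # On the serving VM (e.g. 192.168.20.10), in /etc/exports:
    #   /srv/data 192.168.20.0/24(rw,sync,no_subtree_check)
    # then reload the export table:
    exportfs -ra

    # On the client VM:
    mount -t nfs 192.168.20.10:/srv/data /mnt/data
    ```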
  19. weehooey-bh

    backup data of my VM NFS

    It is okay to have NFS on a VM. I am not sure what you want to achieve with NFS on the VM and adding it as PVE storage. The VM's storage is on the PVE host, presumably on the storage drives in your PVE host. You then share that storage back with the PVE host. Once you have shared it back...
  20. weehooey-bh

    backup data of my VM NFS

    I will not ask why you want to mount an NFS share of a VM in PVE... :) A bit more information would be helpful. What is the IP address and subnet of your PVE node? What is the IP address and subnet of your VM? Please run this command on your node and post the output: cat...