Search results

  1. weehooey-bh

    Error 500

    @Kevin José, connect a monitor to node0 and log into it. Once logged in, run ip a and it will list all your IP addresses.
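As a sketch of what that listing gives you: `ip a` prints one block per interface, and a small awk filter (shown here against made-up sample output, not a real node) pulls out just the interface names and IPv4 addresses.

```shell
# Feed a made-up sample of `ip a` output through awk to list only the
# interface names (trailing colon kept) and their IPv4 addresses.
awk '/^[0-9]+:/ {iface=$2} /inet / {print iface, $2}' <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
2: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.10/24 scope global vmbr0
EOF
# prints:
# lo: 127.0.0.1/8
# vmbr0: 192.168.1.10/24
```

On a live node, `ip -br a` gives a similar one-line-per-interface summary without any filtering.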
  2. weehooey-bh

    [SOLVED] Proxmox multiple interfaces without bonding in one vmbr useable?

    Interesting. Once you dig a little further, please update. If a ping works and an HTTP connection starts but fails, I would focus on the OPNsense VM and work from there. Stateful firewalls can behave that way with asymmetric routing, though that is not necessarily your issue. For testing, I would remove...
  3. weehooey-bh

    [SOLVED] Proxmox multiple interfaces without bonding in one vmbr useable?

    Thanks for sharing the config and the side answer :) Can you tell me more about the testing and what you see when things fail? Have you tested with two computers as devices on eno3, eno4 and eno5? Or just with a computer and one of the APs? If only connecting one AP, are your tests...
  4. weehooey-bh

    [SOLVED] Proxmox multiple interfaces without bonding in one vmbr useable?

    Hi, fundamentally, what you are trying to do is possible. Please post the configuration for your OPNsense VM. You will find it here: /etc/pve/nodes/<node>/qemu-server/ Side question: Is there a reason you have your PVE node with IP addresses on both bridges?
  5. weehooey-bh

    Proxmox Ceph Networking

    Without looking at your current /etc/pve/ceph.conf, it would appear you need to remove the monitor mon.pmox02-scan-hq. Here is a guide on how to do it manually: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/ After you remove that monitor, check Ceph's health and make sure...
  6. weehooey-bh

    Proxmox Ceph Networking

    Scott, looks like something changed. Is pmox02-scan-hq offline? Please post the current version of /etc/pve/ceph.conf. Please also post the output of these commands: ceph mon stat ceph config show mon.pmox03 ceph config show mon.pmox01-scan-hq ceph config show mon.pmox02-scan-hq ceph config...
  7. weehooey-bh

    HD Full

    I have taken a deeper look at the configs you posted. Your /etc/pve/storage.cfg has the following: cifs: frigate path /mnt/pve/frigate server 192.168.2.90 share frigate content iso,images preallocation off prune-backups keep-all=1...
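Reflowed into the layout /etc/pve/storage.cfg actually uses (the snippet above is truncated, so there may be further lines after prune-backups):

```
cifs: frigate
        path /mnt/pve/frigate
        server 192.168.2.90
        share frigate
        content iso,images
        preallocation off
        prune-backups keep-all=1
```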
  8. weehooey-bh

    Proxmox Ceph Networking

    Cool. Did you have these powered off for a bit? Did ceph -s change after being online for a bit? Where are you at now?
  9. weehooey-bh

    HD Full

    Please share the config for the Frigate LXC.
  10. weehooey-bh

    HD Full

    Let me see if I understand correctly: You were running TrueNAS Scale as a VM in PVE. The TrueNAS Scale VM had a CIFS share called frigate. You mounted the frigate CIFS share in PVE. What was writing to the frigate CIFS share?
  11. weehooey-bh

    HD Full

    If the share is defined in the web GUI (or CLI) using a specific storage type (e.g. CIFS or NFS), Proxmox VE looks after it. It has more information and control over the share. If you use /etc/fstab and the Directory type storage, you need to manage whether it is online or not. Proxmox VE only...
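A sketch contrasting the two approaches; the storage names, share name, and server address below are illustrative, not taken from the thread:

```
# /etc/pve/storage.cfg: a PVE-managed CIFS storage. PVE mounts it and
# knows whether it is online.
cifs: mynas
        path /mnt/pve/mynas
        server 192.168.2.90
        share backup
        content backup

# Directory-type storage over a mount you manage yourself via /etc/fstab:
#   //192.168.2.90/backup  /mnt/nas  cifs  credentials=/root/.smbcred  0  0
dir: nasdir
        path /mnt/nas
        content backup
        is_mountpoint yes
```

With the dir type, is_mountpoint tells PVE not to use the path unless something is actually mounted there, which helps avoid writing to the local disk when the share drops.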
  12. weehooey-bh

    Proxmox Ceph Networking

    Hey Scott. You will often see clock skew after a reboot. If you are running PVE 7.x or earlier, check whether you are running chrony; it is much better than the older NTP package. Did you have these powered off for a bit? If you did, leave them for a bit to see if they get themselves...
  13. weehooey-bh

    Proxmox Ceph Networking

    Hey Scott. I have not used a full mesh for Ceph. They have always been switched (physical external) networks. But, if the networking is good, Ceph should not care. Now that you have connectivity on the Ceph networks, does ceph -s give you any output? Or are all your monitors gone?
  14. weehooey-bh

    Proxmox Ceph Networking

    Thanks for posting your ceph.conf. I think you will have trouble routing fe80::/64. Since I am a fan of IPv6, let's keep it IPv6. Please change your cluster network to something in fd00::/8 (the usable half of fc00::/7). It should be a random prefix, but something like what you are doing...
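A minimal sketch of generating such a random ULA prefix (RFC 4193 style) with standard tools; the rand_hex helper is made up for this example.

```shell
# Build a random /48 prefix inside fd00::/8 by pulling random bytes from
# /dev/urandom and formatting them as hex groups.
rand_hex() { od -An -tx1 -N"$1" /dev/urandom | tr -d ' \n'; }
prefix="fd$(rand_hex 1):$(rand_hex 2):$(rand_hex 2)::/48"
echo "$prefix"   # e.g. fd3a:91c2:07be::/48
```

A /64 carved out of a prefix like that could then be used as the cluster network in ceph.conf instead of the link-local fe80:: range.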
  15. weehooey-bh

    Proxmox Ceph Networking

    Hey Scott. I do not know of any guide for re-installation. There might be. This might give you what you need to remove Ceph: https://forum.proxmox.com/threads/removing-ceph-completely.62818/ If you are just setting up your cluster, starting fresh is a solid way to go. If you want to try to...
  16. weehooey-bh

    HD Full

    Hi Jens. Thanks for posting the last bit of information. So, you are all fixed up now? Or at least know what is taking up your space on that node?
  17. weehooey-bh

    Error 500

    In your /etc/hosts file, there is an inconsistency. 192.168.102.2 node0.localdomain node2 This should be: 192.168.102.2 node2.localdomain node2 Did you rename your nodes at some point? You will also want to check your /etc/hosts on node0. It should be something like: 192.168.102.X...
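For reference, the corrected node2 entry laid out as it would appear in /etc/hosts (the suggested node0 line in the post is truncated, so it is not reproduced here):

```
192.168.102.2   node2.localdomain node2
```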
  18. weehooey-bh

    HD Full

    Please post the following items: The contents of /etc/pve/storage.cfg The output of find /mnt -maxdepth 3 -type d -ls Where were you looking for /mnt/synology? Were you looking in /etc/fstab, /mnt, or somewhere else?
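To illustrate what that find invocation reports, here it is run against a throwaway directory tree instead of the real /mnt (-ls would add inode, permission, and ownership columns; plain output is shown for brevity):

```shell
# Create a small demo tree, then list every directory up to three levels
# deep, sorted for stable output.
mkdir -p /tmp/demo-mnt/pve/frigate
find /tmp/demo-mnt -maxdepth 3 -type d | sort
# prints:
# /tmp/demo-mnt
# /tmp/demo-mnt/pve
# /tmp/demo-mnt/pve/frigate
```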
  19. weehooey-bh

    HD Full

    Were you connecting to your Synology with NFS? I suspect that this is the issue. I have seen cases where a connection to an NFS share is lost, and the data is written locally. If you remove the NFS share and re-add it, you may get a message indicating that you cannot because the directory has...
  20. weehooey-bh

    HD Full

    Sorry, I can see it isn’t NFS. Do you have any NFS storage?