Search results

  1. Undefined Code: 1006

    Did you harden your ssh config in any way?
  2. Question about datastore sync

    You could sync to a local datastore with more restrictive prune settings and sync this one to the external one.
  3. Performance Maximum

    If by "sluggish" you mean the responsiveness of the graphical interface, then without graphics cards you have basically reached the end of the road. :)
  4. Performance Maximum

    What exactly isn't working? Without a passed-through graphics card, the VMs have to render the graphical interface on the CPU. That is sufficiently inefficient.
  5. What is the correct VLAN mode on managed switch?

    Waah, sorry, now you've got me. You already counted OPNsense as one router, so you indeed only have two. In that light my answer with two WANs is of course incorrect.
  6. What is the correct VLAN mode on managed switch?

    Well, together with OPNsense you'll have three routers, then. I don't want to say that's impossible to set up, but my impression from your answers is that this is beyond your capabilities for now. Start with one router and educate yourself about VLANs and how to set them up. This would make a lot of...
  7. What is the correct VLAN mode on managed switch?

    Your two routers are basically two WAN connections for OPNsense, no need for a VLAN here. If you connect clients directly to the Fritzbox LAN, what do you need the OPNsense for, then? "LAN" for OPNsense is just a name; it can be tagged as well.
  8. No ping from PVE host to router (Fritzbox)

    If you don't need other VLANs on the cable: access mode. If you do need other VLANs on the cable: trunk mode, and probably allow the correct bridge VIDs.
  9. What is the correct VLAN mode on managed switch?

    Port-based VLAN is basically the same as an access port in 802.1q, so 802.1q is what you need. If you want only one VLAN on the cable -> access port (VLAN will be untagged). If you want more than one VLAN on the cable -> trunk port (native VLAN [or PVID] will be untagged, every other VLAN has to...
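    The access/trunk distinction described above can be sketched with plain iproute2 bridge commands; the port names (eth1, eth2) and VLAN IDs are made up for illustration, and the ports are assumed to already be enslaved to a VLAN-aware bridge:

    ```shell
    # Access port: carries only VLAN 10, untagged on the wire (PVID 10)
    bridge vlan add dev eth1 vid 10 pvid untagged

    # Trunk port: VLANs 10 and 20 tagged; the port's PVID (native VLAN) stays untagged
    bridge vlan add dev eth2 vid 10
    bridge vlan add dev eth2 vid 20

    # Inspect the resulting VLAN-to-port mapping
    bridge vlan show
    ```

    The same logic applies on a managed switch, just expressed in the vendor's own configuration UI.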
  10. Proxmox OPNsense network config?

    If you only have one network card, the VLANs arrive tagged on the cable. If the bridge is VLAN-aware, all the VLANs are simply passed through, unless you specify a tag ID when configuring the VM's NIC. When installing the Sense you then have to...
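    A minimal /etc/network/interfaces sketch of the VLAN-aware bridge described above; the interface names (eno1, vmbr0) and addresses are assumptions:

    ```
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    ```

    With the bridge set up like this, giving the VM's NIC a tag ID (e.g. `qm set 100 --net0 virtio,bridge=vmbr0,tag=10`, VMID assumed) delivers only VLAN 10, untagged, to the guest; without a tag, all VLANs arrive tagged inside the VM.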
  11. Performance Maximum

    You can set the CPU type to "host", and normally 4 cores should easily be enough for Windows. Personally I would also not spread them across sockets; that is likely unfavorable with regard to the cache. But that's just my layman's guess. RAM: as much as the system needs...
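    The suggested settings translate to a single `qm` call on the PVE host; the VMID (100) is hypothetical:

    ```shell
    # CPU type "host", 4 cores on a single socket (avoids splitting cache across sockets)
    qm set 100 --cpu host --cores 4 --sockets 1
    ```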
  12. slow NVME disk performance on ZFS

    I would consider these excellent performance values for inside a VM.
  13. apparently corrupted disk image on Ceph pool - how can I fix this?

    If Ceph is healthy, the image in your cluster is in a consistent, yet obviously corrupted, state. You can mount a live system and try to fix the issue inside the guest, or you can replace the disk with one restored from backup.
  14. Installation of Proxmox Backup Server on Proxmox VE hardware

    It's possible. Be aware, though, that if the hardware breaks, you have neither your machines online nor your backup at hand. You don't need another system disk for PBS; just install it via APT.
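    Installing PBS alongside PVE via APT can be sketched as follows; the Debian codename (bullseye here) is an assumption and must match your PVE base release:

    ```shell
    # Add the Proxmox Backup Server no-subscription repository.
    # A PVE host of the same Debian release already trusts the Proxmox release key.
    echo "deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription" \
        > /etc/apt/sources.list.d/pbs.list

    apt update
    apt install proxmox-backup-server
    ```

    Afterwards the PBS web interface listens on port 8007 of the same host.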
  15. Cluster moving VMs around

    With Ceph as backing storage you don't need replication. For a planned downtime I would always go for migrating VMs away manually. Otherwise you still have some downtime.
  16. VLAN bug in PVE 6.4?

    That must be down to your switch configuration. If communication within the VLAN between two nodes doesn't work, the VLAN isn't arriving at both nodes. I see no other explanation; it is certainly not due to the numbering.
  17. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    There are already a lot of people who did the upgrade without any issues, so yes, finding the cause would be highly interesting.
  18. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Maybe they went nearfull because of the rebalancing, true. I would still contact the Ceph experts through the respective channels. Those are a lot more experienced with disaster recovery than the Proxmox forum (no offense [at all!]).
  19. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Everyone who had problems with the upgrade posted logs with full or nearly full pools/OSDs. This is a situation which should be avoided at all costs. The Ceph mailing list is probably the best addressee for such problems. And as the saying goes: no backup, no pity.
  20. Enquiry on Network Architecture

    If all NICs are connected to one switch, there is no need for a second Corosync ring. So option no. 2 is the way to go, although you waste a perfectly capable 10G link ... Corosync should be unaffected by high loads, so running it over a VLAN is only possible if you can guarantee highest priority to Corosync...