Search results

  1.

    [SOLVED] LACP bond issues with native + tagged VLANs

    Hm, not so bad actually. Did you try disabling the pruning? Maybe that's where the hiccups come from. On a second glance, your vlans are marked as not pruned ... Please still post a config of one of the VMs, either the conf file or a screenshot.
  2.

    [SOLVED] LACP bond issues with native + tagged VLANs

    Please post the /etc/network/interfaces and a screenshot of the network section of one of the non-functioning VMs as well as the vlan config of your switch. From what I saw here it should work if everything else is configured properly.
  3.

    [SOLVED] LACP bond issues with native + tagged VLANs

    That's what I'd suggest, yes. If the vlans still don't work, the error has to be somewhere else. Maybe try without the last line first, thus enabling all vlan ids. I don't know exactly what the idea behind the pvid is; maybe it's only for readability, or some fancy asymmetric routing.
  4.

    [SOLVED] LACP bond issues with native + tagged VLANs

    First of all, the tagged interfaces are completely irrelevant for VM communication. If your host doesn't need an address in the respective vlan, you can delete the subinterfaces. For vlan tags to work at the VM level, the bridge has to be vlan-aware. So in your first try it doesn't work because you...
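    For illustration, a minimal /etc/network/interfaces sketch of such a vlan-aware setup, assuming a two-port LACP bond (the NIC names eno1/eno2 and the 192.0.2.x addresses are placeholders, not taken from the thread):

        auto bond0
        iface bond0 inet manual
            # LACP bond over the two physical ports
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-xmit-hash-policy layer2+3

        auto vmbr0
        iface vmbr0 inet static
            # host management address on the untagged/native vlan
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            # make the bridge vlan-aware so guest NICs can carry their own tags
            bridge-vlan-aware yes
            bridge-vids 2-4094

    With a vlan-aware bridge, the tag is set per virtual NIC in the VM's network settings; host-side subinterfaces are only needed if the host itself wants an address in a tagged vlan.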
  5.

    Strong Datacenter using Proxmox

    I would look for some decent SSDs to begin with. VM images on HDDs can be gruesome. But that of course depends on your workload. You can indeed have one main server which replicates to the second and have a third one for backups. You then need the RAID controllers to work in IT mode and remember...
  6.

    Strong Datacenter using Proxmox

    ZFS is superior, full stop. But to benefit the most from it, it needs access to the disks. So you either use an HBA or you flash your RAID controller to IT mode. Your setup is not HA in the usual sense. HA starts with three nodes, which would also be possible in your case. Remember that you cannot...
  7.

    Give the Proxmox host internet access from a virtual machine (macOS) with a passed-through wifi card

    Well, you'll have to configure your Mac VM as a router, i.e. enable packet forwarding. The host and this VM need to share a bridge, and then you put the VM's IP address as the gateway in the networking section of your Proxmox host.
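    A rough sketch of the two halves described above, assuming the macOS VM sits at 192.0.2.50 on vmbr0 (all addresses are placeholders):

        # inside the macOS VM: enable IPv4 packet forwarding
        sudo sysctl -w net.inet.ip.forwarding=1

        # on the Proxmox host (/etc/network/interfaces): default route via the VM
        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.50
            bridge-ports none
            bridge-stp off
            bridge-fd 0

    Depending on the upstream network, the VM will likely also need NAT (e.g. pf on macOS) so that replies to the host's traffic find their way back.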
  8.

    VM boots into the UEFI shell after a restart

    I just click it a few more times; at some point it normally has everything sorted out again.
  9.

    VM boots into the UEFI shell after a restart

    I'm the wrong person to ask about Windows. :-D I would restore the working backup and then follow the update on the console.
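    If the backup is a regular vzdump archive, such a restore from the console could look roughly like this (archive name and VM ID are placeholders):

        # restore the last known-good dump into VM 101
        qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.vma.zst 101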
  10.

    Hypervisor disk defective, what now?

    You can back up files to a PBS instance with the proxmox-backup-client.
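    A hedged example of such a file-level backup (user, host and datastore names are made up):

        # back up /etc as a .pxar archive to a PBS datastore
        proxmox-backup-client backup etc.pxar:/etc \
            --repository root@pam@pbs.example.com:store1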
  11.

    Hypervisor disk defective, what now?

    That depends on where you put them. :-) Depending on the underlying storage, the backup looks a little different.
  12.

    VM boots into the UEFI shell after a restart

    Sounds to me like your Windows disk got trashed. It's probably related to the unintended restart. Maybe the disk is full? Or did you run some obscure updates?
  13.

    How much RAM do I need?

    Old rule: there is no substitute for RAM! Except more RAM.
  14.

    Upgrade 6 to 7 - Bootloader Switch Problem

    It says so right there. It won't do it because there will be problems. You'll have even less fun with Proxmox on USB sticks than with waiting for the hardware replacement.
  15.

    [TUTORIAL] Proxmox 7 with CentOS 7 and Ubuntu 16.04 LTS

    Ubuntu 16.04 went out of support in April 2021. Whoever has the money for an ESM contract also has the money for a subscription with support, which might then even solve the problem directly in the system. And where is it written that every guest system has to survive the hypervisor's version jumps...
  16.

    Shared storage across nodes not working - See comment #56.

    Is ssh from every node to every other node working? I also have the impression that this is not really a storage issue.
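    A quick way to verify this from each node, sketched with placeholder node names:

        # BatchMode fails instead of prompting, so broken key auth shows up immediately
        for n in node1 node2 node3; do
            ssh -o BatchMode=yes root@"$n" hostname || echo "ssh to $n failed"
        done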
  17.

    Best Practices for multiple NICs

    Here's what I did with the same number of NICs: 1x 1GbE for Corosync alone; 2x 10GbE form an LACP bond; said bond and 1x 1GbE form another active-backup bond, which serves as the network device for vmbr0. All vlans come in tagged over vmbr0 and have QoS flags set. This way you can prioritize ceph...
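    Roughly, the layering described above could look like this in /etc/network/interfaces (NIC names are placeholders, and whether this exact bond-on-bond nesting works may depend on the ifupdown2 version):

        # dedicated 1GbE link for corosync
        auto eno1
        iface eno1 inet static
            address 10.0.0.1/24

        # 2x 10GbE LACP bond
        auto bond0
        iface bond0 inet manual
            bond-slaves enp65s0f0 enp65s0f1
            bond-mode 802.3ad

        # active-backup bond: prefers the 10GbE bond, falls back to the second 1GbE port
        auto bond1
        iface bond1 inet manual
            bond-slaves bond0 eno2
            bond-mode active-backup
            bond-primary bond0

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports bond1
            bridge-vlan-aware yes
            bridge-vids 2-4094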
  18.

    Guidance on Shared Storage

    I wouldn't mix ceph and gluster and, to be honest, I also wouldn't ditch ceph. Of course the HDDs are the problem, but why do you think another cluster file system would make this any better? Maybe you can start the replacement with 4 SSDs per node and put a separate pool on them for the io...
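    If the SSDs report their device class correctly, such a separate pool for the IO-hungry guests could be carved out roughly like this (rule and pool names are made up):

        # replicated CRUSH rule restricted to OSDs of class "ssd"
        ceph osd crush rule create-replicated ssd-only default host ssd
        # pool for latency-sensitive VM disks, bound to that rule
        ceph osd pool create vm-ssd 128 128 replicated ssd-only
        ceph osd pool application enable vm-ssd rbd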
  19.

    [ShotInTheFoot] cloud-init package on the host

    While that is surely true, why would anyone want to kickstart a hypervisor with cloud-init? ;-)