Search results

  1. Windows VMs bluescreening with Proxmox 6.1

    So just a short update: I have managed to load an old kernel and am now running 5.0.21-5-pve. It has no problems. The current kernel, 5.3.18-2-pve, produces continuous crashes and blue screens with all possible kinds of messages under WS2019 on two different nodes. Different CPUs, different systems...
  2. Windows VMs bluescreening with Proxmox 6.1

    Experiencing exactly the same on all WS2019 machines on 2 different nodes since the update yesterday :( Does anybody have a clue how I can load the previous kernel on a headless machine?
  3. Proxmox VE 6.1 released!

    Buy a subscription :) Or delete this: /etc/apt/sources.list.d/pve-enterprise.list and add this: deb http://download.proxmox.com/debian/pve buster pve-no-subscription to /etc/apt/sources.list
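The repository switch described above can be sketched as a short shell sequence. This is a hedged sketch assuming a standard PVE 6.x install on Debian Buster; for safety it operates on copies under a temp directory rather than on the live /etc/apt files.

```shell
# Sketch of the no-subscription repo switch from the post above.
# Assumption: standard PVE 6.x paths; a temp dir stands in for /etc/apt
# here so the sketch is safe to run -- on a real node use the real paths.
tmp="$(mktemp -d)"
ent_list="$tmp/pve-enterprise.list"   # stands in for /etc/apt/sources.list.d/pve-enterprise.list
sources="$tmp/sources.list"           # stands in for /etc/apt/sources.list
echo "deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise" > "$ent_list"
rm -f "$ent_list"                     # step 1: drop the enterprise repo
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" >> "$sources"
cat "$sources"                        # step 2: no-subscription repo is configured
# On a real node, follow up with: apt update
```

On a real node the enterprise repo will otherwise keep producing 401 errors during apt update, which is why it is removed rather than just supplemented.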
  4. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    Can confirm it is working stably again now :) Thanks for your assistance. And yes, a cluster over WAN is not recommended, but it works :)
  5. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    Done. Running libknet1: 1.13-pve1 now. Will monitor and report back.
  6. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    Sure. Sorry. See below. By the way, seeing this package list reminded me that yesterday, while trying to solve the problem, besides the above-mentioned change in corosync.conf I also updated the nodes with apt. Just to be sure I am not giving you a log where something has already been fixed by the updates...
  7. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve)
  8. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    Thanks for your attention. Meanwhile, I've altered corosync.conf and increased the token value to 10000 as advised here. This change definitely didn't solve the problem completely (it still crashes), but subjectively it seems to happen less often now (I might be wrong about this). I tried to...
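The token change mentioned above lives in the totem section of corosync.conf. A minimal sketch of the relevant fragment (other totem options stay whatever the cluster already uses; 10000 ms is the value from the post, not a general recommendation):

```
totem {
  version: 2
  cluster_name: mycluster   # placeholder name
  token: 10000              # token timeout in ms, raised for high-latency WAN links
  ...
}
```

On Proxmox VE the file to edit is /etc/pve/corosync.conf, and its config_version field must be incremented so the change propagates to all nodes.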
  9. [SOLVED] PVE 6.0/corosync over WAN (high latency) - loses sync

    I am running a PVE cluster over WAN (different data centers across the globe). It has worked flawlessly the whole time and best suits my needs (of course no shared storage, live migration, or HA, but still central management, easy offline migrations, etc.). Some time ago I upgraded to PVE 6.0 and was able to run the...
  10. Cluster over high latency WAN?

    Same thing has been happening to me since the upgrade to PVE 6. Sometimes (though not as often as you describe) I find my cluster "broken" due to corosync issues. What I do on "disconnected" nodes is simply: killall -9 corosync, then systemctl restart pve-cluster. Then it works flawlessly. It seems that...
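The two recovery commands from the post can be wrapped in a small script. This is a hedged sketch with a dry-run guard added here so it can be exercised without touching a live node; the final `pvecm status` is an assumption about how one would verify the rejoin, not part of the original post.

```shell
# Recovery for a "disconnected" cluster node, per the post above.
# DRY_RUN=1 (default) only prints the commands; set DRY_RUN=0 on a real node.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}
run killall -9 corosync            # force-kill the wedged corosync daemon
run systemctl restart pve-cluster  # restart pmxcfs, which brings corosync back up
run pvecm status                   # check that the node rejoined and has quorum
```

Restarting pve-cluster rather than corosync directly matters: pmxcfs is the service that mounts /etc/pve, and it relaunches corosync as part of its startup.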
  11. Separate backup node – storage type/system considerations

    Yes, I know those approaches. I also use Borgbackup quite often for other purposes. My thought was that there might be something "standard" via the GUI or so ;) But okay, I'll just try it with dockerized Samba. For large files, which is what VM backups are, SMB is fast enough even over WAN.
  12. Separate backup node – storage type/system considerations

    They are rented servers in different data centers, several in the EU, a few in the USA, and some in other countries. Not at all. On average we are at around 25-30 ms, but there are also some spots where we are already at 100 ms. I never had problems with the cluster until now...
  13. Separate backup node – storage type/system considerations

    Hello, I am using a PVE cluster, but without shared storage, since the nodes are located in different data centers around the world. Mostly the cluster serves as a central management solution and a useful GUI tool for VM non-live migrations. I am planning to install a...
  14. Separate backup node - Storage system considerations

    Hello, I am running PVE Cluster, but without any shared storage yet, since the nodes are located in different data centers around the globe. Mostly the cluster works as a central management solution and useful GUI tool for VM non-live migrations. I am planning to install a separate node just...
  15. Proxmox Firewall for NAT

    Using the above-mentioned rule, I just fixed the problem of a firewall-protected VM behind NAT... I wonder whether it's still correct in 2019? Do I really have to add all those rules manually for every VM behind NAT that is protected by the firewall?
  16. Firewall not working on a single NIC out of two?

    Thanks for the prompt reply. Well, under the current settings I am expecting all incoming traffic to net1 of VM111 to be dropped. This doesn't happen, though:
    10.10.20.111 - net0
    PUBLIC_IP_NET1 - net1
    [~]$ nmap -p 443 10.10.20.111
    Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-18 12:22 CET...
  17. Firewall not working on a single NIC out of two?

    My bad :)
    iptables-save
    # Generated by iptables-save v1.6.0 on Mon Mar 18 11:05:23 2019
    *filter
    :INPUT ACCEPT [5:272]
    :FORWARD ACCEPT [2:80]
    :OUTPUT ACCEPT [0:0]
    :PVEFW-Drop - [0:0]
    :PVEFW-DropBroadcast - [0:0]
    :PVEFW-FORWARD - [0:0]
    :PVEFW-FWBR-IN - [0:0]
    :PVEFW-FWBR-OUT - [0:0]
    :PVEFW-HOST-IN...
  18. Firewall not working on a single NIC out of two?

    For hours I've been trying to figure out what's wrong... I have two NICs, both connected to vmbr0 (one with a private and one with a public IP). Here is how it's configured in the Proxmox GUI: Which corresponds to this within the CT/VM:
    auto lo eth0
    iface lo inet loopback
    iface eth0 inet dhcp
    #auto eth1...
  19. Best practice for ZIL/L2ARC with just 4xSSDs (yet another one :)

    Thanks for replying to me in another thread :) Yes, that was my other thought... I assume I'll get 4 identical "enterprise" SSDs... So you think a mirror + stripe, with everything placed on a single pool, is the better idea? And not even bother adding separate SLOG/L2ARC caches but just leave the...
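The "mirror + stripe" layout being discussed is a pool of two mirror vdevs; ZFS stripes writes across vdevs automatically, giving a RAID10-like arrangement. A hedged sketch of the create command, printed rather than executed; the pool name and device names are placeholders (use stable /dev/disk/by-id paths on a real system).

```shell
# Striped-mirror pool from 4 identical SSDs: two mirror vdevs.
# "tank" and /dev/sdX are placeholders for illustration only.
cmd="zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd"
echo "$cmd"   # printed only; run the real command on the actual node
```

With all four SSDs already in the pool, carving out separate SLOG/L2ARC devices from the same drives usually buys little, since the cache devices would be no faster than the pool they accelerate.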
