Search results

  1. U

    Cluster node restarts unexpectedly

    Hi, when I read SuperMicro I would try a BIOS update. A while ago I also had a SuperMicro box that rebooted at irregular intervals - after a BIOS update the machine was suddenly very stable. Unless your I/O is acting up, you should otherwise see something in the log...
  2. U

    Error message after the SSD with Proxmox 5.4-13 filled up to 100%

    Hi, what does the output of the following commands look like (see the sketch after the result list)? zpool status; zpool get all rpool; zfs list -o name,used,refer,volsize,volblocksize,written -r rpool; zfs list -t snapshot. Udo
  3. U

    Proxmox brings all VMs to a standstill

    Hi, this can only be explained by the cluster communication of the production cluster having been disturbed (as others have already suspected - a duplicate IP?). When a node has no quorum, it shuts down all its VMs, assuming they will be started anew on the remaining nodes (with quorum)... (see the quorum check sketch after the result list)
  4. U

    max cluster nodes with pve6?

    Hi, what does the output of pvecm nodes and pvecm status look like? Have you tried restarting corosync on all nodes (one after the other)? systemctl restart corosync (see the sketch after the result list). Udo
  5. U

    Can't access GUI webpage

    Hi, what is the output of ip a s on proxmox1 (see the sketch after the result list)? And what exactly do you type into the web browser? What kind of browser? Udo
  6. U

    max cluster nodes with pve6?

    Hi, I have a 16-node cluster running with corosync3 (7 nodes still on pve5.4) without trouble. Corosync runs on 1 GbE - but with a second path (ring 1) on 10 GbE (1 GbE on 2 nodes). Any errors in the logs? Can you use a second path (see the sketch after the result list)? Are all nodes in one DC with low latencies? Udo
  7. U

    Proxmox search is really slow

    OK, then it must be something different... Does "pvesh ls nodes/node-name/qemu" also take an especially long time on a node? Is "pvesh ls pools" fast? Any trouble with the cluster communication? Udo
  8. U

    Proxmox search is really slow

    Hi, any trouble with a defined storage? Is the output of pvesm status fast, and are all storages OK (see the combined sketch for results 7 and 8 after the result list)? Udo
  9. U

    [SOLVED] Console-Access (Authentication failed) during cluster upgrade

    Hi, if anybody has the same effect and doesn't like "AcceptEnv LANG LC_*": I've added "LC_PVE*" to our AcceptEnv LC_ variables instead. Udo
  10. U

    [SOLVED] Console-Access (Authentication failed) during cluster upgrade

    Hi Dominik, good tip! Due to the Puppet installation I have a lot of AcceptEnv LC_ settings, but not LC_*. With "AcceptEnv LANG LC_*" it works again (see the sshd_config sketch after the result list). Udo
  11. U

    [SOLVED] Console-Access (Authentication failed) during cluster upgrade

    Hi, I have two clusters where I am upgrading the nodes step by step. If I try to open a console of a VM on another cluster node (not the one I am logged in to), I get an "Authentication failed". The console works fine on nodes where pve5 is running. Logged in -> open console on pve5 -> pve5 =...
  12. U

    Meltdown and Spectre Linux Kernel fixes

    Hi, Supermicro and BIOS updates are a pain in the ass. After a few Supermicro servers I now try to use other server mainboards (Asus/ASRock). Udo
  13. U

    Proxmox VE 6.0.9 with SAN FC Storage

    Hi, LnxBil is right, but also make sure that you use plain LVM and not a thin pool. Post the output of pvs, vgs, lvs (see the sketch after the result list). Udo
  14. U

    Poor performance

    Hi, what kind of SSD do you use? An XFS mirror does not sound like a standard installation... What is the output of pveperf -v (see the sketch after the result list)? Udo
  15. U

    Hypervisor storage (RAID-Z2) filled up - deleting VMs / snapshots no longer possible

    Hi, perhaps because you assigned it more? With ZFS you can over-provision - i.e. assign more than you actually have, because only what is really in use gets allocated (plus compression); see the sketch after the result list. Udo
  16. U

    Ceph purge leave some traces behind, can't reconfigure cluster

    Hi, is the Ceph pool still declared in /etc/pve/storage.cfg? Any Ceph files left: find /etc -ls | grep ceph (see the sketch after the result list)? Some files are normal, e.g.: 190028 6 -rw-r--r-- 1 root root 159 Jan 30 2019 /etc/default/ceph 10097 1 drwxr-xr-x 2 root...
  17. U

    Some Newbie questions....!

    Hi Michael, it depends on the SSDs. ZFS on Linux is very flexible, but not really fast... with SSD-only raids it's OK, but with HDDs it depends on your I/O workload. An SSD or NVMe for journaling and cache can be very helpful. But use the right SSD (enterprise - check which SSD is usable for...
  18. U

    Separate backup node - storage type/system considerations

    Hi, that is only a backup to a limited extent... But you could use znapzend by Tobias Oetiker: https://github.com/oetiker/znapzend With it you can create a plan describing which snapshots should exist on the target (not on the source); see the sketch after the result list. That way you only need (lots of) space on the...
  19. U

    Trouble with bnx2 after upgrade to pve 6 due config issue inside VM

    Hi Spirit, I use OVS for the network and it was a double VLAN tag. To be on the safe side I will try the bnx option. Udo
  20. U

    Trouble with bnx2 after upgrade to pve 6 due config issue inside VM

    Hi Spirit, since then I have been careful not to use VLAN tagging on an already tagged VM NIC, so the issue doesn't occur (but if it gets fixed, that will be much better!). Udo
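
Command sketches referenced in the results above

For result 2, a minimal sketch of the ZFS checks from the post, assuming the default Proxmox root pool name rpool:

    # pool health, errors and capacity
    zpool status
    zpool get all rpool
    # space used per dataset/zvol, including the configured volsize
    zfs list -o name,used,refer,volsize,volblocksize,written -r rpool
    # snapshots often keep a full pool full even after data was deleted
    zfs list -t snapshot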
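
For result 3, a sketch of how to check whether a node lost quorum; the one-hour journal window is just an example:

    # "Quorate: Yes/No" shows whether this node is part of a quorate partition
    pvecm status
    # corosync and pve-cluster logs around the time the VMs went down
    journalctl -u corosync -u pve-cluster --since "1 hour ago"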
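
For result 4, the membership checks and the per-node corosync restart as a sketch; restart one node at a time and wait until the cluster is quorate again before moving on:

    # current membership and quorum state
    pvecm nodes
    pvecm status
    # restart the cluster communication on this node only
    systemctl restart corosync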
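
For result 5, a sketch of the address check; the URL is only an example and assumes the default GUI port 8006:

    # which addresses are actually configured on proxmox1?
    ip a s
    # the web GUI is reached via HTTPS on port 8006, e.g.
    #   https://<address-of-proxmox1>:8006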
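
For result 6, a sketch of how to inspect the corosync 3 (kronosnet) links and where a second ring would be configured; the node name and addresses are placeholders:

    # per-link status on the local node
    corosync-cfgtool -s
    # a second path is added in /etc/pve/corosync.conf by giving every node a
    # ring1_addr on the second network and increasing config_version, e.g.:
    #   node {
    #     name: node01
    #     nodeid: 1
    #     quorum_votes: 1
    #     ring0_addr: 10.0.0.1
    #     ring1_addr: 10.10.0.1
    #   }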
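
For results 7 and 8, a combined sketch that times the API calls behind the GUI search; the node name is a placeholder:

    # does listing the guests of one node take long?
    time pvesh ls /nodes/<nodename>/qemu
    # is the pool listing fast?
    time pvesh ls /pools
    # do all defined storages answer quickly?
    time pvesm status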
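
For results 9-11, a sketch of the sshd_config change on the cluster nodes; reloading the "ssh" service afterwards is an assumption about the Debian service name:

    # the fix from the thread: forward LANG and all LC_* variables
    #   AcceptEnv LANG LC_*
    # check what is currently accepted
    grep '^AcceptEnv' /etc/ssh/sshd_config
    # apply the change after editing the file
    systemctl reload ssh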
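
For result 13, a sketch of the LVM checks; the lv_layout column makes a thin pool easy to spot:

    # physical volumes, volume groups and logical volumes on the FC LUNs
    pvs
    vgs
    lvs -a -o +lv_layout
    # a thin pool shows the layout "thin,pool" - plain LVM volumes show "linear"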
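
For result 14, a small sketch to identify the SSDs and run the Proxmox benchmark:

    # model and type of the installed disks (ROTA=0 means non-rotational, i.e. SSD)
    lsblk -d -o NAME,MODEL,ROTA,SIZE
    # CPU, fsync/s and DNS benchmark of the node
    pveperf -v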
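
For result 15, a sketch that shows the difference between the configured zvol size and what it actually occupies; the pool name and the numbers are made up for illustration:

    # volsize is what was assigned, used is what the pool really holds
    zfs list -o name,used,refer,volsize -r rpool
    # hypothetical output line: a 100G disk that only occupies 12G
    #   rpool/data/vm-100-disk-0   12.3G   12.3G   100G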
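
For result 16, a sketch of the two checks; the grep pattern for storage.cfg is an assumption about how Ceph-backed storages are named there:

    # is an RBD/CephFS storage still defined for PVE?
    grep -E -A4 '^(rbd|cephfs):' /etc/pve/storage.cfg
    # leftover Ceph files under /etc (a few packaged defaults are normal)
    find /etc -ls | grep ceph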
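
For result 18, a hedged sketch of a znapzend plan; the dataset, the backup host and the retention schedules are examples only, and the systemd unit name assumes a packaged znapzend installation:

    # keep hourly snapshots for 7 days on the source,
    # daily snapshots for 90 days on the backup node
    znapzendzetup create --recursive \
        SRC '7d=>1h' rpool/data \
        DST:backup '90d=>1d' backuphost:backup/data
    # start the daemon that creates, sends and prunes the snapshots
    systemctl enable --now znapzend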
