Search results

  1.

    Full mesh as failover cluster network

    Sorry, I don't get it yet. ens18 is a physical connection to node 2; node 3 can't communicate with node 1 via this connection. Am I overthinking this? Should I just set the first mesh interface as the failover link for the cluster? Edit: ah, in the end only the IP address matters for the cluster...
  2.

    Full mesh as failover cluster network

    Thanks for your quick reply! I haven't created the cluster yet. Full mesh (at least in my case) means that there is one link to "node 2" and one link to "node 3", both with the same IP address, like this: # Connected to Node2 (.51) auto ens18 iface ens18 inet static address 10.15.15.50...
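    A rough /etc/network/interfaces sketch of what one node in such a routed full-mesh setup can look like (not the poster's actual config; the second interface name ens19 and the peer addresses .51/.52 are assumptions for illustration):

        # Connected to Node2 (.51)
        auto ens18
        iface ens18 inet static
                address 10.15.15.50/24
                up ip route add 10.15.15.51/32 dev ens18
                down ip route del 10.15.15.51/32

        # Connected to Node3 (.52)
        auto ens19
        iface ens19 inet static
                address 10.15.15.50/24
                up ip route add 10.15.15.52/32 dev ens19
                down ip route del 10.15.15.52/32
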
  3.

    Full mesh as failover cluster network

    Hi there, I'm building a new 3-node PVE cluster. I already have a full mesh prepared for live migrations. Is it possible to use this full mesh as a failover network for the cluster itself? In the WebUI at "Create Cluster" it doesn't seem to be possible to add both mesh links. Thanks and greets!
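    For reference, a second Corosync link can be supplied on the command line even if only one is entered in the WebUI; a sketch with a placeholder cluster name and addresses:

        # on the first node: create the cluster with two Corosync links
        pvecm create my-cluster --link0 192.168.1.50 --link1 10.15.15.50

        # on each further node: join, giving that node's own addresses for both links
        pvecm add 192.168.1.50 --link0 192.168.1.51 --link1 10.15.15.51
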
  4.

    How to wipe a disk which was OSD before?

    sdX must be replaced by the "name" of your block device.
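    As a sketch of the kind of wipe meant here (assuming the disk holds nothing you still need; /dev/sdX is the placeholder from the post above):

        # zap leftover Ceph OSD/LVM metadata on the former OSD disk
        ceph-volume lvm zap /dev/sdX --destroy

        # alternatively, clear filesystem and partition-table signatures
        wipefs --all /dev/sdX
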
  5.

    Error found when starting Update from 7 to 8

    Maybe this can help: Proxmox VE Tools -> Proxmox VE Kernel Clean. Can somebody confirm that it is safe to run this script?
  6.

    Error found when starting Update from 7 to 8

    Ok, here it is, the no-subscription repo. I think you can remove /etc/apt/sources.list.d/*, because it's not needed. That doesn't solve your problem, but it makes things clearer. Does apt update show anything unusual?
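    For context, the pve-no-subscription repository is a single APT source line; the Debian codename just has to match the installed release (bullseye for PVE 7, bookworm for PVE 8):

        # /etc/apt/sources.list (example for PVE 7 on Debian bullseye)
        deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
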
  7.

    proxmox unable to connect to internet following first restart

    Looks good so far. Maybe you could also post the output of lspci so we can check whether anything unexpected changes after the next restart.
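    If the full lspci output is too long, something like this narrows it down to the network controllers (the grep pattern is only an illustration):

        # list PCI devices with driver info, filtered to network hardware
        lspci -nnk | grep -iA3 'ethernet\|network'
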
  8.

    Switching from VMware, help deciding pro/con Ceph

    I can confirm that. With PVE/Ceph and a Proxmox Backup Server, the line between snapshot and backup blurs. Especially smaller VMs that don't see much change finish a backup within a few seconds.
  9.

    Error found when starting Update from 7 to 8

    Ok, so the errors occurred while you tried to update to the current PVE 7. That's fine, but then your topic title is a little misleading. So far I can't see any no-subscription repo. To be sure, please also post the output of cat /etc/apt/sources.list. Can you also post a screenshot of the webGUI...
  10.

    proxmox unable to connect to internet following first restart

    Welcome to the Proxmox community! :) Just to be sure: you have an onboard NIC and added a PCI card with a further NIC? Can you log in at the console? If yes, the output of ip addr would be interesting.
  11.

    Error found when starting Update from 7 to 8

    What exactly did you do that led to this output? Did you follow these instructions? Which PVE repo do you use? Please post the output of ls -al /etc/apt/sources.list.d/ and of cat /etc/apt/sources.list.d/*
  12.

    hardware renewal for three node PVE/Ceph cluster

    Yes, there is a dedicated BMC which I didn't mention.
  13.

    hardware renewal for three node PVE/Ceph cluster

    Yes, you can: https://pve.proxmox.com/wiki/Manual:_datacenter.cfg At the moment our live migrations use the switches and therefore the (cross-room) connections between them. These are 2x 10G and can easily be saturated by live migrations. That leads to higher latency between the switches, which...
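    A sketch of the corresponding datacenter.cfg entry for pinning migration traffic to a dedicated network; the subnet is a placeholder:

        # /etc/pve/datacenter.cfg
        # send live migration traffic over the dedicated mesh subnet
        migration: secure,network=10.15.15.0/24
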
  14.

    hardware renewal for three node PVE/Ceph cluster

    I still don't get the point, so let me clarify the network setup. These are connections to our switches:
    - 1x 10G management (PVE WebGUI)
    - 1x 10G Corosync
    - 2x 25G VM network (bonded)
    And these are direct connections, no switches involved:
    - 4x 25G for Ceph "bonded full mesh"
    - 2x 25G VM Migration...
  15.

    hardware renewal for three node PVE/Ceph cluster

    Yes, it is. What exactly do you mean by this?
  16.

    hardware renewal for three node PVE/Ceph cluster

    Yes, we already took this into account. Networking is planned like this:
    Available:
    - 2x 10G onboard
    - 10x 25G Broadcom cards
    Usage:
    - 1x 10G management (PVE WebGUI)
    - 1x 10G Corosync
    - 4x 25G for Ceph "bonded full mesh"
    - 2x 25G VM network (bonded)
    - 2x 25G VM Migration (full mesh)
    - 1x 25G Backup
  17.

    hardware renewal for three node PVE/Ceph cluster

    Any other opinions? (first and last bump :D)
  18.

    hardware renewal for three node PVE/Ceph cluster

    Thanks for this - as far as I understand, you're right: when an OSD fails, the pool becomes degraded, which means the pool keeps working, but there is no redundancy left. But I think that's OK for us. Three times the usable storage must be enough. :D
  19.

    hardware renewal for three node PVE/Ceph cluster

    Hi, after almost five years with our three-node PVE/Ceph cluster, it's hardware renewal time! Core requirements are:
    - about 24 TB of usable storage (fast and scalable)
    - about 512 GB RAM per node (scalable)
    Unfortunately we can't go with AMD EPYC CPUs because of Oracle. Together with...
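    As a rough sanity check on that storage figure, assuming the usual 3-way replication (an assumption on my part):

        24 TB usable x 3 replicas = 72 TB raw capacity,
        before any headroom for recovery and Ceph's near-full limits.
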
  20.

    Problem with proxmox version 5.4-3

    If I were in this situation I wouldn't waste time trying to upgrade such an outdated installation. I would back up the VMs, do a fresh installation of the whole environment and then restore the VMs.
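    A sketch of that backup/restore round trip with the standard tools; the VM ID, storage names and dump filename are placeholders:

        # on the old host: dump the VM (ID 100) to a backup storage
        vzdump 100 --storage backup-nfs --mode stop --compress gzip

        # on the freshly installed host: restore the dump under the same (or a new) VM ID
        qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.gz 100 --storage local-lvm
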
