Search results

  1. Does proxmox's kernel have support for Epyc Rome? (7002)

    @risho asked about compatibility with the second-gen EPYC CPUs, the 7002 series. We too have a cluster of 3 nodes with EPYC1 and it runs perfectly fine (2x HP 325 G10, 1x HP 385 G10). As i see it, support for second-gen EPYC specific features started to be included in the kernel in versions...
  2. Update best practices

    The subscription repos require authentication, so they cannot be accessed by just anyone. Also, subscriptions need internet connectivity for activation.
  3. Update best practices

    If you have a subscription, i don't know how you would do it, as you have to authenticate against the repo. Edit: the updates are for Proxmox from their repo and for Debian itself from the Debian repos. You need all of them. And speaking of threat models, what is the typical issue you think could happen...
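
    For reference, on PVE 6 / Debian Buster the repos mentioned here typically combine like this (standard entries; the exact files depend on your setup and subscription):

        # /etc/apt/sources.list - Debian's own repos
        deb http://deb.debian.org/debian buster main contrib
        deb http://security.debian.org/debian-security buster/updates main contrib

        # /etc/apt/sources.list.d/pve-enterprise.list - the authenticated Proxmox repo
        deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
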
  4. Update best practices

    If you have a license it will fail to work. You NEED an outgoing internet connection for these updates. You don't need to expose the host to the internet directly, just allow it to initiate outgoing connections.
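
    To illustrate "outgoing only", a minimal stateful firewall sketch in nftables syntax; the management subnet and the open ports are assumptions for illustration, not from the thread:

        table inet filter {
          chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept                 # replies to outgoing connections
            iif "lo" accept
            ip saddr 10.0.0.0/24 tcp dport { 22, 8006 } accept  # SSH/web UI from a hypothetical mgmt net
          }
          chain output {
            type filter hook output priority 0; policy accept;  # the host may initiate outgoing connections
          }
        }
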
  5. HA error state on some VMs (due to pve-cluster service crash?)

    So we wait for the pve-cluster package to be updated?
  6. HA error state on some VMs (due to pve-cluster service crash?)

    That means killing the VM as far as i know, but the VM works well; it has absolutely no issues other than the cluster thinking it has issues. It can be managed, migrated, etc. I can remove it from HA and re-add it and it works, but that is not the issue here. The issue is...
  7. HA error state on some VMs (due to pve-cluster service crash?)

    After upgrading to Proxmox 6 i observed a random event: VMs randomly get into HA error states, with a red circle above the VM icon. Their HA state becomes "error". The cluster seems fine otherwise. The logs indicate "Main process exited, code=killed, status=6/ABRT" for the pve-cluster service...
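
    To check for this symptom on a node, standard systemd tooling is enough (generic commands, not quoted from the thread):

        systemctl status pve-cluster     # current state of the pmxcfs service
        journalctl -u pve-cluster -b     # its log since boot; a crash shows the
                                         # "Main process exited, code=killed, status=6/ABRT" line
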
  8. Please fix download.proxmox.com certificate

    The certificate itself is valid (issued by Let's Encrypt) until October 2 2019; it is just issued for a different subdomain, enterprise.proxmox.com, instead of download.proxmox.com.
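
    A quick way to verify which certificate a server actually presents (plain openssl, nothing Proxmox-specific):

        openssl s_client -connect download.proxmox.com:443 \
                -servername download.proxmox.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -enddate
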
  9. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    It doesn't work, i tried it. When a VM is in this state, console, migration and backup are all unusable. Sometimes i only found the affected machines after the backups finished and i saw the failure emails, so it is not triggered by the VNC console. And nothing seems to work but stopping the VM and starting...
  10. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Unfortunately i have no known misbehaving VMs right now (i don't know how to issue bulk commands that might trigger it, and they show no indication otherwise until you need a console, migration or backup). I restarted all of them, and since then the issue does not seem to have appeared again.
  11. [SOLVED] Lost Connections with VMs

    Hmm. Somehow the post had gone to the wrong topic (maybe related to the forum upgrade?)
  12. How to remove an NFS share from a Proxmox cluster?

    Both. Remove it from the GUI, then unmount it. Otherwise the share will be back after a reboot (either working or spamming errors).
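
    A sketch of the two steps on the CLI; the storage ID mynfs is a placeholder, and pvesm remove is the command-line equivalent of removing the storage in the GUI:

        pvesm remove mynfs       # drop the storage definition (GUI: Datacenter -> Storage -> Remove)
        umount /mnt/pve/mynfs    # unmount the share if it is still mounted
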
  13. [SOLVED] Lost Connections with VMs

    The cluster seemed fine; i saw no indication of inter-cluster communication problems at that time. I watched continuous corosync quorum tool output and everything was just fine. And only some VMs had this issue; others were available. Also, no VM ever recovered without being shut down and started again.
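
    The continuous quorum output mentioned here can be produced with corosync's own tooling, for example:

        pvecm status             # Proxmox view of cluster and quorum state
        corosync-quorumtool -m   # constantly monitor quorum status (Ctrl+C to stop)
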
  14. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    If i remember correctly most were LVM (LVM over iSCSI), but i think there were also some NFS ones. Unfortunately i cannot say for sure about NFS because we migrated quite a few storages lately. But this issue is not only backup related: the machines cannot be managed at all, as in no migration or even VNC.
  15. Separate Cluster Network

    We have a cluster with 3 nodes, each with 4x gbit onboard and a 2x 10gbit add-in card (1x HP DL 385 G10 + 2x HP DL 325 G10): 2x gbit links for management/VM traffic/live migration (redundant, going into 2 stacked switches), 2x gbit links for the cluster (redundant, going into 2 stacked switches) in an isolated...
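
    As an illustration of one such redundant pair, a bond in /etc/network/interfaces might look like this (interface names, mode and address are assumptions, not from the post):

        auto bond1
        iface bond1 inet static
            address 10.10.10.1/24      # placeholder IP on the isolated cluster network
            bond-slaves eno3 eno4      # the two gbit ports going to the stacked switches
            bond-mode active-backup    # or 802.3ad if the switch stack supports LACP
            bond-miimon 100
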
  16. Separate Cluster Network

    We have 2 redundant gbit links per server (2 stacked switches dedicated to the cluster) carrying only corosync traffic. Is it recommended to use another network as redundancy for corosync in this case too?
  17. Separate Cluster Network

    I was referring to the redundant corosync addresses that Tom's reply above references (ring0, ring1, etc.). BTW i changed the corosync addresses in /etc/pve/corosync.conf to the new links we created, and corosync just updated them at runtime without skipping a beat.
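
    For context, the per-node ring addresses live in the nodelist section of /etc/pve/corosync.conf; a sketch with placeholder IPs (on PVE 6, config_version in the totem section must be increased for the change to propagate):

        nodelist {
          node {
            name: node1
            nodeid: 1
            quorum_votes: 1
            ring0_addr: 10.10.10.1   # placeholder IP on the new cluster network
            ring1_addr: 10.10.20.1   # optional second ring for redundancy
          }
          # one node block per cluster member follows
        }
        totem {
          config_version: 4          # bump on every edit so nodes apply the change
          # cluster_name, interface entries, etc. unchanged
        }
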
  18. Separate Cluster Network

    I was just preparing to change the PVE 6 cluster IPs. So then what needs to be done? Can't we just replace the old IPs with new ones from a new network? Do we need to have redundant links?
  19. AD auth with SSL doesn't work after upgrade to PVE 6.0

    So i got the solution for this. OpenSSL in Debian Buster enforces TLS 1.2 as the minimum protocol version. Older Windows Server versions have some older implementations, although they do seem to use 1.2 in the end. The solution is either 1. to change /etc/ssl/openssl.cnf and change MinProtocol =...
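
    Presumably the edit referred to is the system-wide default section of /etc/ssl/openssl.cnf on Buster; a sketch of the relaxed setting (this lowers TLS security system-wide, so weigh the trade-off; some setups also need the SECLEVEL in CipherString lowered):

        [system_default_sect]
        MinProtocol = TLSv1               # Buster ships TLSv1.2; lowered so older Windows Server LDAPS works
        CipherString = DEFAULT@SECLEVEL=2
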
  20. AD auth with SSL doesn't work after upgrade to PVE 6.0

    I observed the same thing. In the pvedaemon logs i see: authentication failure; rhost=host_ipaddr user=username@domain msg=Connection reset by peer