Recent content by grobs

  1. [Proxmox 6] How to add two existing nodes (with containers) to a cluster?

    Hi, I'm currently running 2 standalone Proxmox 6 nodes and I want them to be part of the same cluster. Container IDs do not overlap between the two nodes. Is there a way to create a cluster from those two nodes without having to dump -> transfer -> import every container? Regards
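    (For reference, the standard cluster-creation commands look like the sketch below; the cluster name and IP are placeholders. As far as I know, "pvecm add" normally refuses to run on a node that already hosts guests, which is exactly the constraint this question is about.)

      # On the first node: create the cluster
      pvecm create mycluster

      # On the second node: join using the first node's IP.
      # NOTE: this normally refuses to run if the joining node
      # already hosts containers/VMs.
      pvecm add 192.168.1.10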
  2. Corosync disaster (different version between nodes)

    In order to begin somewhere, here is the state of one of the 11 nodes (pm6-staging-03):

      pm6-staging-03:~# pvecm status
      Can't use an undefined value as a HASH reference at /usr/share/perl5/PVE/CLI/pvecm.pm line 479, <DATA> line 755.

      pm6-staging-03:~# systemctl status pve-cluster
      ●...
  3. Corosync disaster (different version between nodes)

    Unfortunately not... We were experiencing very bad issues on the cluster the day before those changes and that was kind of a "last chance" change.
  4. Corosync disaster (different version between nodes)

    Hi, Short story: I have two different versions of the corosync configuration on my cluster, and now "pvecm status" gives me this ugly error: "Can't use an undefined value as a HASH reference at /usr/share/perl5/PVE/CLI/pvecm.pm line 479, <DATA> line 755." My cluster is totally broken. Long story: I'm...
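    (As a diagnostic starting point for this kind of mismatch, comparing the config_version field of the cluster-wide and node-local corosync configuration on each node usually shows which nodes diverged. A sketch, using the default Proxmox paths:)

      # Cluster-wide copy (via pmxcfs) vs. the node-local copy
      grep config_version /etc/pve/corosync.conf
      grep config_version /etc/corosync/corosync.conf

      # Ring/link status as seen by corosync itself
      corosync-cfgtool -s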
  5. Understand LXC container's storage size

    No, I see 806G instead of 900G. Before snapshot removal (I had run those commands and noted the results): ON PROXMOX HOST:

      # zfs list
      NAME         USED   AVAIL  REFER  MOUNTPOINT
      rpool        782G   62.9G  104K   /rpool
      rpool/ROOT...
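    (For context on why deleting a snapshot can free that much space: ZFS accounts blocks held only by snapshots separately from live data. Something like the following shows where the space goes; the property names are standard ZFS, and which dataset to inspect depends on the container's subvol:)

      # usedbysnapshots = space held only by snapshots
      zfs list -r -o name,used,usedbysnapshots,refquota rpool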
  6. Understand LXC container's storage size

    I have only done it via the web UI (Resources > Resize disk). I didn't get any error message. I tried rebooting the container but nothing changed. Other potentially interesting info: the container had a snapshot. Removing it made the container 100G larger (I don't understand why). Here is the...
  7. Understand LXC container's storage size

    Hi, I have an LXC container with a large disk defined in the web UI ("900G"), but the filesystem doesn't seem to have taken the last resize commands into account. Here is the current state of the storage: On the container:

      # df -h
      Filesystem  Size  Used  Avail  Use%  Mounted...
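    (If a GUI resize doesn't show up inside the container, the CLI equivalent plus a quota check is a reasonable next step. A sketch: 101 is a placeholder VMID, and the dataset name follows Proxmox's usual subvol naming, which may differ here:)

      # CLI equivalent of the web UI resize (placeholder VMID 101)
      pct resize 101 rootfs 900G

      # For a ZFS-backed container, the size df reports is bounded
      # by the subvol's refquota
      zfs get refquota,used,referenced rpool/data/subvol-101-disk-0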
  8. Cluster Greyed Out

    => You're right. In that case, if the solution we find is to reboot, the issue shouldn't come back. I'll let you know. => I've already executed those commands on every node of the cluster, as described in the first post, but nothing changed. For information, I wasn't able to add RAM to the...
  9. Cluster Greyed Out

    Even if it might solve the issue, rebooting isn't a suitable solution in production, and in my opinion this issue should be investigated in depth.
  10. Cluster Greyed Out

    Hi everyone, My Proxmox 6 cluster is in a bad state. I wanted to add RAM to one of my containers (on which some processes were being killed by the OOM killer) using the GUI, and it displayed this error: A day after this issue, the cluster was in a very strange state: The quorum is OK (maximum votes)...
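    (The usual first aid for greyed-out nodes is restarting the services that feed status to the GUI; a sketch of the commonly suggested commands, possibly the same ones referenced in the replies above:)

      # Restart the daemons behind the web UI's status display
      systemctl restart pvestatd pvedaemon pveproxy

      # If pmxcfs itself is stuck, restart the cluster filesystem too
      systemctl restart pve-cluster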
  11. [SOLVED] Some services fail to start, trying to set up mount namespacing

    OK, thanks. I created an issue on munin-monitoring's GitHub: https://github.com/munin-monitoring/munin/issues/1278
  12. [SOLVED] Some services fail to start, trying to set up mount namespacing

    OK, I understand, thanks for the information. Is there any way to allow only namespace spawning rather than enabling the whole nesting feature? The fact that the guest could have access to /proc and /sys on the host is actually pretty bad, so it looks more like a workaround than a real solution.
  13. [SOLVED] Some services fail to start, trying to set up mount namespacing

    In fact no, and it works! Thank you very much for the quick reply! I don't really understand what this setting allows (containers in containers?). Is there any security risk in enabling this feature? And if not, why isn't it enabled by default?
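    (The setting referred to here is presumably the container's nesting feature; on Proxmox 6 it can be toggled per container along these lines, with 101 as a placeholder VMID:)

      # Enable nesting for container 101, then restart it
      pct set 101 -features nesting=1
      pct reboot 101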
  14. [SOLVED] Some services fail to start, trying to set up mount namespacing

    Hi everyone, I'm currently struggling with a blocking issue. I'm running Proxmox 6 (proxmox-ve: 6.1-2 / kernel 5.3.10-1-pve) and created a container running Debian Buster from the latest standard template on pveam (debian-10.0-standard_10.0-1_amd64.tar.gz). The problem is that some services...
  15. [SOLVED] Cluster node under ZFS in a strange state (containers greyed out)

    Unfortunately, this issue is still present in 4.15. I added more information there: https://bugzilla.proxmox.com/show_bug.cgi?id=1943