Search results

  1. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    I re-added p38 and it worked this time. Now I will move VMs over, then update and reboot the old nodes. Somehow something went wrong when adding it the first time. I guess the HTTP GUI process for adding nodes could be improved to address issues like mine.
  2. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    I removed node p38 and now the GUI shows the cluster. Now back to the original plan. Hopefully someone will find this information useful in the future; that's why I'm writing all this down.
  3. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    Maybe I can get it back by removing p38. pvesh get /cluster/config/join --output-format json-pretty fails with: '/etc/pve/nodes/p38/pve-ssl.pem' does not exist!
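
    A minimal sketch of removing a stale node entry with the standard pvecm tooling; node name p38 is from the thread, and the cleanup step is an assumption about leftover state rather than something the poster confirmed:

        # On a healthy cluster node: remove the dead node from the membership.
        pvecm delnode p38

        # Assumed cleanup: drop leftover per-node state, such as the directory
        # holding the missing pve-ssl.pem.
        rm -rf /etc/pve/nodes/p38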
  4. [SOLVED] I don't know if it's a bug, but my disk is gone

    It is not a bug, but expected behavior. You told PM to delete the VM with its disk, and it did just that. The data was actually deleted only after the running VM was shut down, because the process still held an open file descriptor to the disk file, which allowed you to transfer the data off.
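
    A minimal sketch of the open-descriptor trick the post describes, for the next person in this spot; the VMID, process match, and fd number are illustrative assumptions:

        # Find the still-running KVM process for the VM (VMID 100 is illustrative).
        pid=$(pgrep -f 'kvm.*-id 100' | head -n1)

        # A deleted-but-open disk image shows up as "(deleted)" in the fd table.
        ls -l /proc/$pid/fd | grep deleted

        # Copy the data out through the open descriptor before shutting the VM
        # down (fd number 12 is illustrative).
        cp /proc/$pid/fd/12 /root/recovered-disk.raw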
  5. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    I wanted to follow the procedure described above, but noticed I do not have a cluster anymore, while the cluster still works?!? So the GUI shows no cluster, but shows cluster nodes, and pvecm reports as usual:

        root@p35:~# pvecm status
        Cluster information
        -------------------
        Name: XYZ
        Config...
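
    A minimal sketch of checks that separate the corosync layer from the GUI layer; the service names are the standard PVE ones, not something the poster mentioned:

        # Quorum and membership as corosync sees them.
        pvecm status

        # The GUI reads cluster state through pmxcfs; check the services behind it.
        systemctl status pve-cluster pveproxy

        # Recent pmxcfs errors often explain a GUI/cluster mismatch.
        journalctl -u pve-cluster --since "1 hour ago"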
  6. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    Hi @tom, thank you for your suggestion. I was just in the process of updating. I want to add a new node to the cluster so I can live-migrate VMs to it, then update and reboot the old node, repeating the process with all nodes. But I cannot do that, because I cannot add the new node to migrate VMs to...
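
    A minimal sketch of the rolling-update loop described above, once a target node is available; the VMIDs and the target name p38 are illustrative:

        # Live-migrate each VM off the node being updated.
        for vmid in 100 101 102; do
            qm migrate $vmid p38 --online
        done

        # With the node empty, update and reboot it, then repeat on the next node.
        apt update && apt dist-upgrade
        reboot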
  7. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    I ran out of time to continue debugging. I will provide more info from the new node, which is now running separately from this cluster network, so I can look at its files. However, I think something went horribly wrong when joining the cluster, and it might even be a bug.
  8. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    Looking at the logs on the primary cluster node, where the new cluster node was joined, I see these errors:

        Jan 29 14:44:38 p35 pmxcfs[6037]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 4)
        Jan 29 14:44:38 p35 corosync[6204]: [CFG ] Config reload requested by...
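
    A minimal sketch for verifying that the rewritten corosync config is consistent after a join; the paths are the standard PVE locations:

        # The cluster-wide copy and the local copy should carry the same config_version.
        grep config_version /etc/pve/corosync.conf /etc/corosync/corosync.conf

        # Link/ring status; a node that never properly joined stands out here.
        corosync-cfgtool -s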
  9. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    I did start it again, to get some more errors. Once it started, the cluster on the old nodes stopped working: pvecm status did not return any value as long as the new node was online, and this was logged:

        Jan 29 15:06:19 p35 corosync[6204]: [TOTEM ] A new membership (1.55d) was formed. Members
        Jan 29...
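
    A minimal sketch of taking the problem node back out of the membership while debugging, assuming the standard PVE service names:

        # On the new node: stop the cluster stack so it leaves the corosync membership.
        systemctl stop pve-cluster corosync

        # On an old node: confirm that quorum and pvecm responsiveness return.
        pvecm status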
  10. [BUG] PM 6 adding new node, all other PVE stopped working && GUI shows no cluster, while listing cluster nodes!

    Hi, I did what I have done many times before: added a node to an existing cluster. After adding it, the whole cluster went down (VMs were running, but PVE stopped). Here is how it looked on one node:

        [Fri Jan 29 14:46:36 2021] INFO: task pvesr:42198 blocked for more than 120 seconds.
        [Fri Jan 29 14:46:36...
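
    A minimal sketch of reading these reports; pvesr is PVE's storage replication runner, and it blocks like this when /etc/pve goes read-only after quorum is lost (my reading of the log, not the poster's):

        # Kernel reports of tasks stuck in uninterruptible sleep.
        dmesg | grep -i 'blocked for more than'

        # pvesr hangs without quorum; check whether the node is quorate.
        pvecm status | grep -i quorate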
  11. ZFS mirror issues

    Uf... you cannot. ZFS does not (yet) support removing devices. You basically created RAID 0 (a ZFS pool striped over two disks), and if you remove one, half of the data will be missing. There are two options: create a new pool and send the data over, or create RAID 10 with 4 disks by adding mirror...
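
    A minimal sketch of both options, with illustrative pool and device names:

        # Option 1: build a new mirrored pool and replicate the data over.
        zpool create tank2 mirror /dev/sdc /dev/sdd
        zfs snapshot -r tank@move
        zfs send -R tank@move | zfs recv -F tank2

        # Option 2: turn the 2-disk stripe into RAID 10 by attaching a mirror
        # leg to each existing disk.
        zpool attach tank /dev/sda /dev/sde
        zpool attach tank /dev/sdb /dev/sdf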
  12. Using HDD pool slows down SSD pool on the same server

    Tuxis, thanks for the info. I know the SLOG is used only for sync writes, so it really should not be a shared point that slows down the SSDs as well. tburger, thanks for the ideas. On the SAS controller I have: 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT...
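
    A minimal sketch for checking what the log device actually sees; the pool name tank is an assumption:

        # Per-vdev I/O, including the log device, refreshed every second.
        zpool iostat -v tank 1

        # Whether datasets are forcing, defaulting, or skipping sync writes.
        zfs get sync tank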
  13. Suspended ZFS pool affects all other VMs on node (even if no vDisks are affected)

    Seems like your issue has gained traction now. Please do report back. I'm curious. :-)
  14. Suspended ZFS pool affects all other VMs on node (even if no vDisks are affected)

    As you are using failing SATA drives, when a failure occurs, the system retries the access many times. While it is doing so, it cannot use the other SATA drives, and other arrays can also be affected. I understand that you use different SATA controllers, but obviously they somehow influence each other. If...
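
    A minimal sketch for confirming that a drive is retrying internally; the device name is illustrative:

        # Reallocated/pending sectors and error counters point at a failing drive.
        smartctl -a /dev/sda | grep -Ei 'reallocated|pending|error'

        # Kernel-side link resets and command timeouts show up in the kernel log.
        dmesg | grep -Ei 'ata[0-9]|I/O error'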
  15. Using HDD pool slows down SSD pool on the same server

    Here are some graphs. I assure you the system cannot do 80 GB of reads and writes per second. :-)
  16. Using HDD pool slows down SSD pool on the same server

    I agree I could do some more testing, but I do not have the time at the moment, nor do I wish to experiment on a production cluster. I will just pull these HDDs out and create HDD-only nodes, and when I have the time, set up another node to test this scenario. I also think I know what your problem is and will reply...
  17. Using HDD pool slows down SSD pool on the same server

    I have the same assumption, but have no idea how to monitor queue depth on the hardware (SATA/SAS) controller. I might be able to monitor and adjust queue depth in ZFS, as I vaguely remember such options. They are Seagate Enterprise Capacity 3.5 HDD 6TB 7200RPM 12Gb/s SAS 128MB Cache Internal...
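
    A minimal sketch of the ZFS-side queue monitoring the post is reaching for; zpool iostat -q reports per-vdev queue depths, and the module parameter below is one of the tunables that cap them:

        # Per-vdev active and pending queue depths, refreshed every second.
        zpool iostat -q 1

        # One of the I/O scheduler caps: concurrent async writes per vdev.
        cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active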
  18. Using HDD pool slows down SSD pool on the same server

    Hi guys, a few days ago I imported a backup onto the HDD pool on a server that also has an SSD pool. After a few minutes, all guest VMs (they run only on the SSD pool) started reporting problems with hung tasks etc., and the services on them stopped working. The host had high IOWait and low CPU usage...
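
    A minimal sketch for attributing the IOWait to a pool; sysstat's iostat is assumed to be installed, and the device names will differ:

        # Per-device utilization and wait times; the saturated pool's disks stand out.
        iostat -x 1

        # The same view from ZFS's side, per pool and per vdev.
        zpool iostat -v 1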
  19. Delete multiple backups at once.

    https://bugzilla.proxmox.com/show_bug.cgi?id=3260
