Search results

  1. [SOLVED] ceph problems moving cluster to new subnet

    That doesn't work because no monitor is responding, "ceph -s" hangs, and no ceph commands respond. However, I found a solution, and am posting it here to help other people who may run into this: I added the "old" IP address as a 2nd IP address on the monitor nodes. Then the monitors were able...
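
    A minimal sketch of that workaround (adding the old subnet's address back as a secondary IP so the monitors answer again), assuming a hypothetical old monitor address 10.0.1.11/24 and interface name eno1 that are not from the thread:

      # add the old subnet's address as a secondary IP on the monitor node (placeholder address/interface)
      ip addr add 10.0.1.11/24 dev eno1
      # once the monitors have been reconfigured for the new subnet, drop it again
      ip addr del 10.0.1.11/24 dev eno1
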
  2. [SOLVED] ceph problems moving cluster to new subnet

    Hello.. We have a proxmox cluster that moved sites, and cannot move the old subnet to the new site. So far, I've done the following: - Brought up the cluster on a private network not connected to the (new) site network - Let ceph settle - Followed online tutorials to change the proxmox cluster...
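
    The excerpt above cuts off mid-step; for orientation, a hedged sketch of where the monitor addresses live in a Proxmox-managed /etc/pve/ceph.conf, using a placeholder node name and a placeholder new subnet (10.20.0.0/24) that are not from the thread. Re-addressing monitors involves more than editing this file, but these are the entries a subnet move ultimately has to reflect:

      [global]
          public_network = 10.20.0.0/24
          mon_host = 10.20.0.11 10.20.0.12 10.20.0.13

      [mon.pve1]
          public_addr = 10.20.0.11
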
  3. upgrade from 7 to 8 hangs while importing rpool

    Hello.. I am upgrading a standalone server from proxmox 7.4 to 8. I followed the instructions and pve7to8 reported no issues. I've done multiple upgrades from 7 to 8 at work, but this is for a home system, and it's the first time it failed. After the reboot, the OS failed importing rpool...
  4. ceph: cannot create monitor: monitor address 'xxx' already in use (500)

    Thanks (even though a bit late)! I remember trying both of those things back then, but it didn't help. I ended up moving the mon to another node, but it still bugs me that I couldn't find a rational reason why there was an issue.
  5. ceph: cannot create monitor: monitor address 'xxx' already in use (500)

    Hello, I have a monitor node in our cluster that had an ungraceful reboot. After the node came up, the monitor could not join the cluster. After some retries I destroyed the monitor (from the GUI) and recreated it, but it still could not join the cluster. I waited about a day "just in case" the...
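
    The destroy/recreate step described above can also be done from the shell; a sketch using the pveceph tool, assuming a placeholder node name pve2 that is not from the thread:

      # remove the broken monitor (the monitor ID is usually the node name)
      pveceph mon destroy pve2
      # recreate it on that node
      pveceph mon create
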
  6. Resizing root partition and giving proxmox more space

    It's been a while since I came across this, but I think I just ignored it. If I remember correctly, the error was reported because pve/data is already configured as a thin pool, so conversion to the same type is nonsensical (at least that's what lvconvert thinks).
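
    A quick way to confirm that pve/data is already a thin pool (and that the conversion was therefore redundant); this is a generic LVM check, not a command from the thread:

      # a 't' at the start of the Attr column marks a thin pool
      lvs -o lv_name,lv_attr,lv_size pve
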
  7. Proxmox backup hangs pruning older backups

    But if you do have many nodes ... That feature request applies when communicating with a backup server. Not sure if things are different when using the included backup utility to back up to local ceph storage. Oh yeah, about that: it's nice that there is an option to select a single node, but it...
  8. Proxmox backup hangs pruning older backups

    Ok, understood.. However, in order to make the GUI more user-friendly, I would recommend the following: Currently, in the edit backup job popup, there is a tick mark on the left column that allows you to select all container types in one click. That gives the impression that it's ok to do...
  9. Proxmox backup hangs pruning older backups

    Yes, I am backing up multiple nodes at the same time. This used to work, however: things would grind to a halt, but eventually succeed. Now, they just hang. Output of pveversion -v below: # pveversion -v proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve) pve-manager: 7.2-11 (running...
  10. Proxmox backup hangs pruning older backups

    Hello, I have a problem when backing up to a ceph cluster of spinning disks. I have a cluster of 27 server-class nodes with 60 OSDs on a 10gig network. If I back up ~10 VM/CTs it works fine. Upping that number to ~20, the backup grinds to a halt (write bandwidth in the KB/s range) but...
  11. Can I move a CEPH disk between nodes?

    The way I see it, if that method does not work, what's the alternative? Erase the drives and let ceph do its thing. It's only a win to try it out.
  12. Can I move a CEPH disk between nodes?

    No inconsistencies.. There is some initial delay bringing the OSDs up until they catch up, but that's the same as you would get if a node is down for some time and the OSDs have to catch up. It takes me less... I know exactly what you mean about green status. There's always something... It may...
  13. Can I move a CEPH disk between nodes?

    Turns out clonezilla is not the solution. It gets confused by the tmeta partitions and croaks, even in "dd" mode. What I ended up doing is the following: - make sure there are no mgr/mds/mon daemons and no VMs/CTs on the node - tar up /var/lib/ceph on the old drive and store it somewhere on the...
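
    A minimal sketch of the "tar up /var/lib/ceph" step mentioned above, with /mnt/backup as a placeholder destination that is not from the thread:

      # archive the node's ceph state from the old drive
      tar czf /mnt/backup/var-lib-ceph.tar.gz -C / var/lib/ceph
      # later, restore it onto the new drive's filesystem
      tar xzf /mnt/backup/var-lib-ceph.tar.gz -C /
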
  14. Can I move a CEPH disk between nodes?

    clonezilla is probably what I will use, since I already have it on our PXE server and we've been using it for cloning and backing up physical machines for quite some time. I am just not sure how to resize the tmeta partitions that Proxmox uses..
  15. Can I move a CEPH disk between nodes?

    Yeah, I thought about doing this, but I am not sure how to deal with the tmeta partitions... If you have any thoughts, I'd be glad to hear them..
  16. Can I move a CEPH disk between nodes?

    Along the same lines, I have a similar issue: I want to replace the Proxmox boot drive with a larger one. It looks like the easiest solution is to: (a) migrate VMs/CTs to different nodes (b) shut down the machine and update the HW (c) re-install Proxmox on the new boot drive (d) remove the "old" instance...
  17. Confusing Ceph GUI Info when using multiple CephFS volumes

    Does anybody know if there are any updates on this? When can we expect Proxmox to support multiple CephFS filesystems in the GUI? Thanks
  18. spurious kernel messages since upgrade to 7.2

    Did some digging and found out that these are spurious logs for thermal prochot throttling. Please take a look at the following link: https://www.spinics.net/lists/kernel/msg4380894.html I can verify that the following command: # wrmsr -a 0x19c 0x0a80 indeed silences the spurious messages...
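
    For anyone trying the command above: wrmsr is part of the msr-tools package and needs the msr kernel module loaded; a short sketch (the MSR value is the one from the linked post, the rest is standard Debian/Proxmox usage):

      apt install msr-tools
      modprobe msr
      # write the value suggested in the linked post to IA32_THERM_STATUS (0x19c) on all cores; the change does not persist across reboots
      wrmsr -a 0x19c 0x0a80
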
  19. spurious kernel messages since upgrade to 7.2

    @Bruno Félix, if you are still seeing this, can you please verify whether the nodes that are having this problem also have an Intel Omnipath HFI card installed? If not that, maybe some other fabric card? We see this only on machines that have HFI cards.