Search results

  1. Shutdown applied to all nodes?!

    Ohh, just wait till you use pveceph purge and realize it wipes the entire Ceph production cluster. You then check the documentation once again to see if it was anywhere noted that it wipes the entire 30TB storage cluster. Nope... nowhere mentioned! Subsequent pondering ensued. pvecm and qm...
  2. Failed to start Import ZFS pool SAN\x2dpxmx

    Add rootdelay=20 either to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run update-grub, or to /etc/kernel/cmdline and run proxmox-boot-tool refresh. It is apparently a new feature to have to do this on a fresh install of version 7.
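
    A minimal sketch of those two variants, assuming a stock Proxmox VE 7 install; the rootdelay value comes from the post above, everything else is illustrative:

    # Variant 1: GRUB-booted systems - append rootdelay=20 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    root@pve01:~# update-grub
    # Variant 2: systems booted via proxmox-boot-tool (e.g. ZFS on UEFI) - append to the single-line kernel command line:
    root@pve01:~# sed -i 's/$/ rootdelay=20/' /etc/kernel/cmdline
    root@pve01:~# proxmox-boot-tool refresh
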
  3. pveceph purge wipes entire ceph cluster, not just a host.

    Oh, and upon configuring the cluster... loads of errors. Reinstall Proxmox for the f..... 10th time! Proxmox 7 -> Proxmox ME. Proxmox Vista is next?
  4. Deleted ceph on node, stupid!

    The stupidity here lies not with you; it lies with the extremely lacking help text of pveceph help purge. Once again, lacking or misleading documentation. I just wiped a cluster this way as well... Where should I send the bill for restoring this mess? Someone has shown gross negligence and it is...
  5. pveceph purge wipes entire ceph cluster, not just a host.

    Yeah, one of those Proxmox adventures where shit just hits the fan for absolutely no obvious reason. Please add to the help file a definition of which Ceph-related data and configuration files are removed, and where, because it is not only local... I am very, very, very annoyed! USAGE: pveceph purge [OPTIONS]...
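
    For contrast with pveceph purge, a hedged sketch of tearing Ceph down on one node only, daemon by daemon; the OSD ID and node name are placeholders, and the sequence assumes the cluster can rebalance (or that the data on this node is expendable):

    root@pve01:~# ceph osd out 3                  # placeholder OSD ID on this node
    root@pve01:~# systemctl stop ceph-osd@3
    root@pve01:~# pveceph osd destroy 3
    root@pve01:~# pveceph mon destroy pve01       # then remove this node's mon/mgr, if any
    root@pve01:~# pveceph mgr destroy pve01
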
  6. Shell.... ends when visiting another pane.

    So, I installed a server today and used the GUI shell to do an apt upgrade. I briefly checked another part of the UI and browsed away from the window. When I returned, a new session was open and the apt upgrade could no longer be reconnected to. This feature single-handedly forced me to reinstall the server. Why...
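
    Not mentioned in the snippet, but the usual guard against exactly this is to run the upgrade inside a terminal multiplexer so it survives the web shell being torn down; tmux is an assumption here (apt install tmux first), screen works the same way:

    root@pve01:~# tmux new -s upgrade             # detachable session
    root@pve01:~# apt update && apt dist-upgrade
    root@pve01:~# tmux attach -t upgrade          # reattach from any later shell if the web console dies
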
  7. [SOLVED] sda has a holder...

    Solution once and for all to all those "disk is busy", "disk has holder", "disk has....." errors:
    root@pve01:~# sgdisk --zap-all /dev/sdx
    root@pve01:~# readlink /sys/block/sdx
    ../devices/pci0000:00/0000:00:01.1/0000:01:00.0/host5/port-5:10/end_device-5:10/target5:0:10/5:0:10:0/block/sdx
    root@pve01:~# echo...
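
    The post above is cut off; a hedged sketch of how the holder is usually identified first (device name is a placeholder):

    root@pve01:~# ls /sys/block/sdx/holders/                # shows e.g. dm-3 still claiming the disk
    root@pve01:~# lsblk -o NAME,TYPE,MOUNTPOINT /dev/sdx    # the holder appears as a child lvm/crypt entry
    root@pve01:~# dmsetup ls                                # maps dm-N back to a device-mapper name
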
  8. [SOLVED] sda has a holder...

    How does one KILL THAT DEVICE MAPPER SO THAT I CAN USE THE DISKS!!! AND NO... REBOOT IS NOT THE CORRECT ANSWER!! sgdisk --zap-all: no effect. dd ******* of=/dev/sdX: NO EFFECT!
  9. [SOLVED] sda has a holder...

    So what? I do not care that it has a holder. Kill the holder and wipe the disk as asked! What is the reason for all these workarounds we constantly have to do to administer these systems? The disk was previously in a Ceph installation, it is no more, and I need it for something else! I can't...
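
    Since the disk used to be a Ceph OSD, the holder is almost certainly the OSD's leftover LVM/device-mapper volume. A hedged sketch of clearing it; the mapping name is a placeholder taken from dmsetup ls output, and ceph-volume is only available while the Ceph packages are still installed:

    root@pve01:~# dmsetup ls                                  # find the leftover ceph--...--osd--block--... mapping
    root@pve01:~# dmsetup remove ceph--VG--osd--block--ID     # placeholder name: releases the holder
    root@pve01:~# wipefs -a /dev/sdx                          # drop LVM/Ceph signatures
    root@pve01:~# sgdisk --zap-all /dev/sdx                   # now runs without "has a holder"
    root@pve01:~# ceph-volume lvm zap --destroy /dev/sdx      # alternative, if ceph-volume is installed
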
  10. Unable to run backup because two hosts with 0 vm's are offline

    How inconvenient... that the two nodes with absolutely 0 VMs are offline. The VMs to be backed up are located on the two online nodes. Why can Proxmox Backup Server not figure out how to do the damn backup? So, as a silly workaround I now have to set the backup job to only run on server 1 first...
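
    A hedged sketch of that per-node workaround, run from each online node instead of as one cluster-wide job; the storage name is a placeholder:

    root@pve01:~# vzdump --all --storage pbs-backup --mode snapshot   # backs up only the guests local to this node
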
  11. Let me delete the stuff I decide to delete!

    The disks are not listed as unused under VM 102. They were not present in the config files at all. My workaround solved the issue.
  12. Let me delete the stuff I decide to delete!

    VM 102 USED to be placed on Ceph. I moved the disk away without ticking "delete source". Now I want to delete this; I do not want to fight with silly popups telling me completely irrelevant stuff... So, here is the weird workaround I have to do: 1. Shut down VM 102 2. Copy the 102.conf file...
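
    Not the workaround from the post, but a hedged sketch of how such an orphaned disk can usually be removed without hand-editing config files; the storage and volume names are placeholders:

    root@pve01:~# qm rescan --vmid 102                    # re-registers orphaned volumes as unusedN entries
    root@pve01:~# qm set 102 --delete unused0             # drop the rediscovered reference from the config
    root@pve01:~# pvesm free ceph-pool:vm-102-disk-0      # remove the volume itself from storage
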
  13. 3 hosts bricked due to apt upgrade

    Tried dist-upgrading pve02 today. This was the result:
    run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.11.22-5-pve /boot/vmlinuz-5.11.22-5-pve
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-5.11.22-5-pve
    Found initrd image: /boot/initrd.img-5.11.22-5-pve...
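
    Not from the thread, but the usual recovery sketch when a freshly installed kernel will not boot; the package version is copied from the log above, and the commands assume an older kernel can still be booted from the GRUB menu:

    root@pve02:~# apt install --reinstall pve-kernel-5.11.22-5-pve   # re-run the kernel postinst hooks
    root@pve02:~# update-grub                                        # regenerate /boot/grub/grub.cfg
    root@pve02:~# proxmox-boot-tool refresh                          # only needed on proxmox-boot-tool systems
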
  14. 3 hosts bricked due to apt upgrade

    I can attempt that. Currently the systems are running somewhat crippled, but stable. I will reinstall the cluster during the coming weekend. Before reinstalling, I will attempt your suggestion and return to you with the results.
  15. 3 hosts bricked due to apt upgrade

    pve01 also does not like the kernel package.
  16. 3 hosts bricked due to apt upgrade

    Screenshot of PVE01 trying to boot the -4 kernel, before I replaced stuff in grub.cfg to boot -3.
  17. 3 hosts bricked due to apt upgrade

    Not on three different nodes. One does not have ECC, the two others do: Ryzen 5 PRO 4650G, Ryzen 7 3800X, and an i3. I just tried on a laptop with a Celeron in it, same bingo. I am sure network corruption would show up on my MPTCP router. It seems the cluster "behaves" somewhat stably now. Also, I...
  18. 3 hosts bricked due to apt upgrade

    disk errors? -Nope..
    pool: rpool
    state: ONLINE
    scan: scrub repaired 0B in 00:00:42 with 0 errors on Thu Sep 30 23:06:02 2021
    config:
        NAME    STATE    READ WRITE CKSUM
        rpool   ONLINE      0     0...