Search results

  1. VM Migration fails with "only root can set 'affinity' config"

    I ran into a somewhat similar issue when migrating a VM, albeit not with the affinity setting but with a very special set of low-level qemu args. I guess it boils down to the same privilege issue? 2025-05-29 11:00:32 ERROR: error - tunnel command...
  2. negative SSD "Wearout -140%"

    your calculation makes sense, yes ... then again, it is somewhat hard to believe ... So I decided to have a look at some other servers that came with those drives, and what can I say: it appears you're correct after all :) With a similar drive and identical firmware I see...
  3. negative SSD "Wearout -140%"

    yes, I'm aware of that limitation (which is shared by many cheap SSDs, even today). Like I said, it's hard to believe, though, that a pure boot device running PVE can generate so much written data.
  4. negative SSD "Wearout -140%"

    No, percent lifetime remaining says 116 (RAW_VALUE). The 240 you are referring to is from the VALUE column. The VALUE column represents the "normalized" current value of the attribute, on a scale of usually 1 to 253 (sometimes also 1 to 100). WORST is the worst value observed so far in the...
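
    As an aside on how such a negative figure can arise: a minimal sketch, assuming (as the discussion above suggests) that the UI derives wearout as 100 minus the normalized VALUE of the lifetime attribute, so a VALUE above 100 flips the result negative. The device path and grep pattern below are placeholders, not taken from the thread.

        # inspect the normalized VALUE column of the lifetime attribute (placeholder device)
        smartctl -A /dev/sda | grep -i lifetime

        # if wearout is computed as 100 - VALUE, a normalized VALUE of 240 gives:
        echo $((100 - 240))   # prints -140, i.e. the "-140%" shown in the GUI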
  5. negative SSD "Wearout -140%"

    One of our servers uses two Crucial MX500 SSDs in a ZFS RAID1 setup as boot drives. By chance, I checked the server's SMART values in the UI, and it shows a whopping negative wearout of -140%. Not sure what to make of this. In the shell, smartctl -a doesn't show anything extraordinary except the...
  6. [SOLVED] increasing the size of a legacy /boot partition

    thanks @leesteken, the idea of shrinking the swap partition and using the additional space for the /boot partition got me on the right track :) I didn't use GParted but did it on the running node. In case anyone stumbles on this as well, here's what I did as root (assuming that md0 is mounted...
  7. [SOLVED] increasing the size of a legacy /boot partition

    One of my PVE nodes was installed some years ago, and even though I have always upgraded it to the newest PVE version (currently 8.4.3), one thing has been bothering me for a while. When the node was installed, it was set up with ext4 & mdadm, ending up with a partition layout like this: $...
  8. PVE 8.3 node with btrfs booting into grub command line after hard reset

    thanks again @Fantu for those really helpful explanations! For the time being, I think I have experimented enough with the effects of this likely hardware issue and my next step would be to remove the faulty device from my RAID1 array or "profile" in btrfs terms :) Simply attempting to remove...
  9. PVE 8.3 node with btrfs booting into grub command line after hard reset

    First, thank you @Fantu and @waltar for your support so far. I've had a little time today to investigate the issue further, and I found some strange things. This time, from the grub command line, which I landed in upon reboot, I could successfully do an ls (hd0,gpt3)/ as well as an ls (hd1,gpt3)/...
  10. PVE 8.3 node with btrfs booting into grub command line after hard reset

    well, it looks like I'm back at square one: the node again only boots into grub. Yesterday evening I decided to start a scrub, and that ended in low-level block storage errors like these: [27979.988958] nvme nvme1: I/O tag 604 (b25c) opcode 0x2 (I/O Cmd) QID 8 timeout, aborting req_op:READ(0)...
  11. PVE 8.3 node with btrfs booting into grub command line after hard reset

    Thanks for the extensive explanation. What I did with clonezilla was just to see if I could manually mount the two btrfs partitions, like I did before with the proxmox live system. And I could; apparently, it just worked. I didn't perform any actual cloning, but headed directly to the...
  12. PVE 8.3 node with btrfs booting into grub command line after hard reset

    well, and it gets even more fascinating. After just mounting and subsequently unmounting the two partitions in clonezilla, I attempted a simple reboot into proxmox, and that worked. After booting, however, the ring buffer contains heaps of lines like these: [Mon Jan 6 15:47:54 2025] BTRFS...
  13. PVE 8.3 node with btrfs booting into grub command line after hard reset

    I am afraid I wasn't clear enough, sorry. blkid shows that both partitions on both NVMes have the same UUID (see my last screenshot). What is different is the error message I get. One says "device 1 uuid 3f3d..." and when trying with the other NVMe, I get a different "device 2 uuid 6...."...
  14. PVE 8.3 node with btrfs booting into grub command line after hard reset

    it's the same, unfortunately, just bailing out with a different UUID
  15. PVE 8.3 node with btrfs booting into grub command line after hard reset

    Hi, for my home lab I wanted to give btrfs a try, and since PVE supports btrfs+RAID on root, I added a PVE 8.3 node with exactly that setup to my little 2-node lab. Things had gone smoothly so far, but after a recent hard reboot, the node doesn't boot anymore, but...
  16. Issues with HP P420i and SMART

    you are trying to open a binary file (/usr/bin/smartctl) with a text editor; that won't work.
  17. [SOLVED] VMs freeze with 100% CPU

    I think it's this commit https://lists.proxmox.com/pipermail/pve-devel/2023-September/058995.html
  18. [SOLVED] VMs freeze with 100% CPU

    Thanks @fiona and @fweber for your efforts here; this seems to be a particularly nasty bug ... I can confirm this too: I could indeed revive such a stuck VM by hibernating and then resuming it: # qm suspend 150 --todisk 1 Logical volume "vm-150-state-suspend-2023-08-03" created. State...
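
    For reference, the revive sequence described in that post boils down to hibernating the stuck guest to disk and then starting it again; a minimal sketch, assuming VM ID 150 as in the quoted output and that qm start restores the saved state as usual:

        # hibernate the stuck VM: its state is written to a state volume and the VM stops
        qm suspend 150 --todisk 1

        # start the VM again; it resumes from the saved state instead of a cold boot
        qm start 150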
  19. [SOLVED] VMs freeze with 100% CPU

    I remember trying with KSM and/or ballooning disabled months ago, to no avail. Maybe it improves the situation, but at least for us it did not make a real difference.