Search results

  1. System hanging after upgrade...NIC driver?

    Answering myself. System still crashing with 6.2.16-19-pve
  2. System hanging after upgrade...NIC driver?

    Just upgraded a test server with the standard no-subscription repo enabled to kernel 6.2.16-19. Let's see if it's stable.
  3. [SOLVED] How to recover failed Raid / access storage

    Better not to use ZFS on top of hardware RAID. Get a cheap HBA controller instead so that ZFS can fully access the disks.
  4. [SOLVED] How to recover failed Raid / access storage

    Great news! And please use more than one disk for your pool ;)
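
    As a rough illustration of a pool with more than one disk (a minimal sketch; the pool name and disk IDs are placeholders, not from the thread):

      # create a mirrored pool from two whole disks, referenced by stable IDs
      zpool create tank mirror \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
      zpool status tank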
  5. [SOLVED] How to recover failed Raid / access storage

    Seems like you didn't use zvol block storage for your VMs? So if these are just raw or qcow files you should be fine. Just give it a try. You can copy your Proxmox config from /etc/pve/nodes/....<your-node-name>../qemu-server/100.conf to the new server. Is your Proxmox running? If so - create a...
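
    A minimal sketch of that copy step; VM 100 comes from the quoted path, but the source node name pve1 and the target host new-pve are placeholders:

      # copy the VM definition to the new host; /etc/pve/qemu-server/
      # is the config directory of the local node on the target
      scp /etc/pve/nodes/pve1/qemu-server/100.conf \
          root@new-pve:/etc/pve/qemu-server/100.conf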
  6. [SOLVED] How to recover failed Raid / access storage

    The zfs list output shows your VM disks. Now try to rescue them (e.g. zfs send and receive to another machine). Running ZFS on only one disk is not good practice.
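
    A minimal sketch of such a send/receive rescue, assuming a dataset named rpool/data/vm-100-disk-0 and a target pool tank on another host (all placeholders):

      # snapshot the VM disk, then stream it to a pool on another machine
      zfs snapshot rpool/data/vm-100-disk-0@rescue
      zfs send rpool/data/vm-100-disk-0@rescue | \
        ssh root@other-host zfs receive tank/rescue/vm-100-disk-0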
  7. [SOLVED] How to recover failed Raid / access storage

    I'm not sure, but something may be missing from my instructions. I would expect the actual disk ID to be written after the /dev/disk/by-id/ - ? But it seems you did not build your pool with "by-id" but with /dev/.... (see zdb output). So try to zpool import with /dev/sdb1 (or...
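
    A minimal sketch of such an import, assuming the pool is named rpool (a placeholder); -d only tells zpool where to scan for member devices:

      # scan /dev for pool members and list anything importable
      zpool import -d /dev
      # then import the pool that was found there
      zpool import -d /dev rpool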
  8. [SOLVED] How to recover failed Raid / access storage

    Hi @novafreak69, it's been a while since I wrote that, so I can't fully remember. But basically it depends on what you have stored. If you have the pool mounted, you might just copy the files to another target.
  9. Proxmox Backup Server (beta)

    Sounds awesome, and published just at the right time. Will surely test it...
  10. [SOLVED] How to recover failed Raid / access storage

    Hi guys, thanks for the hint. This helped in looking at the old PVE installation, which on the other hand didn't help to access the data on the other drive. But the solution is quite easy (though it took me a lot of time to figure out). So for others with similar problems: 1. use "fdisk -l" to check...
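
    The snippet is cut off after step 1, so only that first step is sketched here (nothing beyond it is reconstructed):

      # list all disks and their partition tables to locate the pool's partition
      fdisk -l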
  11. [SOLVED] How to recover failed Raid / access storage

    Hi all, a friend of a friend has had a Proxmox server running. And as I sometimes play around with Proxmox, I was asked for help. The situation is the following: the server had 3 disks. One SATA-DOM with Proxmox (3.x *argh*) installed, and two spinning drives which are in a RAID somehow. Now one of...
  12. [SOLVED] Reinstall Proxmox-Ceph Node after Crash

    I solved this. Will update this post with a description later...
  13. Recover image from pool (set min_size)

    Hi, I recovered the "other" server, so the pool healed itself with 2 available OSDs. Anyway, it would be interesting to know whether this would have worked? Anyone? br Tim
  14. Recover image from pool (set min_size)

    Hi there, besides my problem with the crashed Ceph cluster (see https://forum.proxmox.com/threads/reinstall-proxmox-ceph-node-after-crash.47142/) I have another problem. I have a separate SSD pool with one SSD on each Ceph host. Ceph-1 is still down (see the other thread) and the SSD in Ceph-3...
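
    The min_size change the thread title refers to would look roughly like this (the pool name ssd-pool is a placeholder; lowering min_size lets the pool accept I/O with fewer replicas and should only be a temporary recovery measure):

      # check the current replication settings of the pool
      ceph osd pool get ssd-pool size
      ceph osd pool get ssd-pool min_size
      # temporarily allow I/O with a single surviving replica
      ceph osd pool set ssd-pool min_size 1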
  15. [SOLVED] Reinstall Proxmox-Ceph Node after Crash

    Darn, hit the wrong forum. Can a mod move this to the English one?
  16. [SOLVED] Reinstall Proxmox-Ceph Node after Crash

    Hi all, we had a major outage of our cluster consisting of 3 PVE hosts which serve containers and VMs. In addition there are 3 Proxmox-based Ceph servers. I was preparing to switch from running 4.4 to 5.x. 1. Ceph-3 didn't come up after reboot. It turned out that the HBA controller...
  17. [SOLVED] Upgrade hanging

    Thanks! This worked for me as well!
  18. [SOLVED] Upgrade hanging

    Hi, today I wanted to dist-upgrade my cluster. After the three "VM" servers I wanted to upgrade the other 3 Ceph servers. But on the first one the "apt-get dist-upgrade" got stuck. [...] Setting up ceph-common (10.2.6-1~bpo80+1) ... Setting system user ceph properties..usermod: no changes ..done...
  19. ceph ssd and sata pools

    I followed (more or less ;-) ) this guide: https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/ This worked so far. Only one small issue, which was discussed here: https://forum.proxmox.com/threads/ceph-move-osd-to-new-root.31912/#post-158165
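
    The linked guide separates the pools by editing the crushmap by hand; as a rough sketch of the same goal using the newer device-class mechanism (a different technique than the guide's, and the rule and pool names are placeholders):

      # one replicated rule per device class, then bind each pool to its rule
      ceph osd crush rule create-replicated ssd-rule default host ssd
      ceph osd crush rule create-replicated hdd-rule default host hdd
      ceph osd pool set ssd-pool crush_rule ssd-rule
      ceph osd pool set sata-pool crush_rule hdd-rule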