Search results

  1.

    Need Guidance on Replacing SSDs in ZFS Pool with Limited Disk Slots

    You can check with proxmox-boot-tool status whether the bootloader is installed on both disks; unfortunately, you cannot really verify that the system will boot without actually booting it ;)
  2.

    Need Guidance on Replacing SSDs in ZFS Pool with Limited Disk Slots

    This should just about work; you do not need the detach step. After shutdown and physical disk replacement, just copy the partition table, removing the uuid and label in the process (so that they are not duplicates): sfdisk -d /dev/WORKING | sed 's/, uuid.*//; /label-id/d;' | sfdisk...
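    The truncated sfdisk pipeline above would typically end by writing the filtered dump to the replacement disk; /dev/NEW below is a hypothetical device name, and the sample dump uses made-up values purely to show what the sed stage removes.

```shell
# Sketch of the partition-table copy, assuming the replacement disk is /dev/NEW:
#   sfdisk -d /dev/WORKING | sed 's/, uuid.*//; /label-id/d;' | sfdisk /dev/NEW
# Running sfdisk needs real disks, so here the sed stage is applied to a
# sample dump (made-up values) to demonstrate that partition uuids and the
# disk label-id are stripped while start/size/type survive.
sed 's/, uuid.*//; /label-id/d;' <<'EOF'
label: gpt
label-id: 11111111-2222-3333-4444-555555555555
device: /dev/WORKING
unit: sectors

/dev/WORKING1 : start=2048, size=2097152, type=21686148-6449-6E6F-744E-656564454649, uuid=AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE
/dev/WORKING2 : start=2099200, size=1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=FFFFFFFF-0000-1111-2222-333333333333
EOF
```

    Dropping the uuid fields and the label-id line is what keeps the clone from carrying duplicate identifiers; sfdisk generates fresh ones when it writes the table to the new disk.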
  3.

    Going back to windows 11 pro after 7 very good Proxmox years

    That sounds very interesting; have you documented your setup somewhere?
  4.

    Backup to external Disks - multiple Issues

    No, in this case I have 3 disks with zpools zp_extern1 to zp_extern3. They all have a ZFS subvolume backup-extern with the mountpoint /backup-extern: root@srv01:~# zfs list NAME USED AVAIL REFER MOUNTPOINT zp_extern3 4.13T 13.9T 192K none...
  5.

    Going back to windows 11 pro after 7 very good Proxmox years

    If you currently run PVE 7 (even though it is out of support), you can still reinstall a fresh Proxmox 7 (even if that is not recommended)
  6.

    Backup to external Disks - multiple Issues

    1. You do not need a second PBS (even if it is nice to have ;) ). PBS now supports a local sync. 2. You also do not need ZFS for PBS on your external storage disks; any Linux FS will do. This is how I did it: /usr/local/bin/backup-replicate-pbs.sh #!/bin/bash export...
  7.

    [SOLVED] Pseudoterminal lost during upgrade

    Simply restart the upgrade (apt full-upgrade); you may have to run dpkg --configure -a first (apt will tell you). If the apt / dpkg processes are still running, kill them if necessary, and if any lock files are still present, please delete them.
  8.

    Going back to windows 11 pro after 7 very good Proxmox years

    PVE host backup in PBS is in the works, afaik (I hope it is ready soon). But the idea is that there is not much important data on the PVE host itself, just the network config, storage config and so on. If your PVE is part of a Proxmox cluster, you just remove it from the cluster, do a fresh pve...
  9.

    Unable to create ZFS storage

    You can also use wipefs -a on your disk; make sure it is not in use first
  10.

    Going back to windows 11 pro after 7 very good Proxmox years

    Don't forget to use CrowdStrike to secure your Windows experience ;) Sorry, your post looks like clickbait; how in the world are you not able to recover your Proxmox from your backups? If you use REAR, restore PVE from REAR and restore the VMs from PBS. If REAR does not work: install Proxmox, apply...
  11.

    Can't boot PVE after multiple ungraceful shutdowns

    You broke it, you can keep the pieces... If fsck cannot fix it, saving your data and doing a fresh install is probably your best option. ALWAYS have backups; use PBS
  12.

    Ubuntu VM Can't get apache2 to work outside the network

    Private networks like 192.168.x.y, 172.16-31.x.y and 10.x.y.z are NOT routed on the internet. If you have a static IP on your router, you can do this setup with that IP and port forwarding
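    The private ranges named above are the RFC 1918 blocks (10/8, 172.16/12, 192.168/16). A small sketch to classify an address; is_private is a hypothetical helper, not a standard tool:

```shell
# Sketch: classify an IPv4 address against the RFC 1918 private ranges
# quoted in the post. Pattern matching only; no validation of the input.
is_private() {
    case "$1" in
        10.*)                                  echo private ;;
        192.168.*)                             echo private ;;
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo private ;;
        *)                                     echo public ;;
    esac
}

is_private 192.168.1.8   # -> private
is_private 8.8.8.8       # -> public
```

    An address in any of these blocks is only reachable from outside via the router's public IP plus a port-forwarding rule.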
  13.

    VM disk on the host larger than shown anywhere

    Qcow2 can contain snapshots; perhaps fstrim is also not working correctly. Please check with: qemu-img info /var/lib/vz/images/107/vm-107-disk-0.qcow2 then (with the VM shut down) qemu-img convert /var/lib/vz/images/107/vm-107-disk-0.qcow2 -O qcow2...
  14.

    help! /etc/systemd/system deleted, pve still live ...

    You should be able to copy that directory from another Proxmox install without problems
  15.

    Lost in Transaction

    I assume the images were deleted during one of the delete operations. When a file that is open in a process is deleted, it remains accessible to that process (here kvm). Once the process terminates, the file is gone.
  16.

    [SOLVED] Node reboots unexpectedly when quorum is lost

    Works as advertised: your cluster has 3 votes. With one node away there are 2 votes left, and 2 out of 3 is a majority. If your Raspi goes away, there is only 1 vote left, which is a minority, so it quits
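    The vote arithmetic above can be sketched as follows; the counts are the 3-vote example from the post (corosync's actual computation lives in its votequorum service):

```shell
# Quorum sketch: a partition stays quorate only with a strict majority
# of the expected votes. With 3 total votes, majority = 3/2 + 1 = 2.
total_votes=3
majority=$(( total_votes / 2 + 1 ))

for remaining in 2 1; do
    if [ "$remaining" -ge "$majority" ]; then
        echo "$remaining of $total_votes votes: quorate"
    else
        echo "$remaining of $total_votes votes: NOT quorate"
    fi
done
# prints:
#   2 of 3 votes: quorate
#   1 of 3 votes: NOT quorate
```

    This is why losing one node of three is survivable while losing two is not: the strict-majority rule prevents two separated partitions from both claiming quorum.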
  17.

    CT container can't access network

    Now post the data of the broken container
  18.

    [SOLVED] no route to host

    Also post the output from ss -tlpen
  19.

    [SOLVED] no route to host

    Watch your syslog for blocked / dropped packets: journalctl -f while trying the ssh connection. Also, can you connect from the Rocky Linux instance to itself? user@rocky:# ssh -v 192.168.1.8