Search results

  1. Backup to external Disks - multiple Issues

    No, in this case I have 3 disks with zpools zp_extern1 to zp_extern3. They all have a ZFS subvolume backup-extern with the mountpoint /backup-extern: root@srv01:~# zfs list NAME USED AVAIL REFER MOUNTPOINT zp_extern3 4.13T 13.9T 192K none...
  2. Going back to windows 11 pro after 7 very good Proxmox years

    If you currently run PVE 7 (even though it is out of service), you can still reinstall a fresh Proxmox 7 (even if that is not recommended)
  3. Backup to external Disks - multiple Issues

    1. You do not need a second PBS (even if it is nice to have ;)). PBS now supports a local sync. 2. You also do not need ZFS for PBS on your external storage disks; any Linux FS will do. This is how I did it: /usr/local/bin/backup-replicate-pbs.sh #!/bin/bash export...
  4. [SOLVED] Pseudoterminal lost during upgrade

    Simply restart the upgrade (apt full-upgrade); you may have to run dpkg --configure -a first (apt will tell you). If the apt/dpkg processes are still running, kill them if necessary, and if lock files are still present, please delete them
  5. Going back to windows 11 pro after 7 very good Proxmox years

    PVE host backup in PBS is in the works, afaik (I hope it is ready soon). But the idea is that there is not much important data on the PVE host itself, just network config, storage config and so on. If your PVE is part of a Proxmox cluster, you just remove it from the cluster, do a fresh pve...
  6. Unable to create ZFS storage

    You can also use wipefs -a on your disk; make sure it is not in use first
  7. Going back to windows 11 pro after 7 very good Proxmox years

    Don't forget to use CrowdStrike to secure your Windows experience ;) Sorry, your post looks like clickbait; how in the world are you not able to recover your Proxmox from your backups? If you use REAR, restore PVE from REAR, then restore VMs from PBS. If REAR does not work: install Proxmox, apply...
  8. Can't boot PVE after multiple ungraceful shutdown

    You broke it, you can keep the pieces... If fsck cannot fix it, saving your data and doing a new install is probably your best option. ALWAYS have backups; use PBS
  9. Ubuntu VM Can't get apache2 to work outside the network

    Private networks like 192.168.x.y, 172.16-31.x.y and 10.x.y.z are NOT routed on the internet. If you have a static IP on your router, you can do this setup with that IP and port forwarding
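The RFC 1918 ranges named above can be matched with simple shell glob patterns. A minimal sketch; the function name is made up for this example:

```shell
# Return 0 (true) if the given IPv4 address is in one of the private
# ranges 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16, else 1.
is_private_ipv4() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *) return 1 ;;
  esac
}

is_private_ipv4 192.168.1.8 && echo "private"  # not reachable from the internet
is_private_ipv4 8.8.8.8     || echo "public"   # routable address
```

Such an address can only be reached from outside via the router's public IP plus a port-forwarding rule.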
  10. VM disk on host larger than shown anywhere

    Qcow2 can contain snapshots; maybe fstrim is also not working correctly. Please check with: qemu-img info /var/lib/vz/images/107/vm-107-disk-0.qcow2 then (with the VM shut down) qemu-img convert /var/lib/vz/images/107/vm-107-disk-0.qcow2 -O qcow2...
  11. help! /etc/systemd/system deleted, pve still live ...

    You should be able to copy that directory from another Proxmox install without problems
  12. Lost in Transaction

    I assume the images were deleted during one of the delete operations. When a file that is open in a process is deleted, it remains accessible to that process (here kvm). Once the process exits, the file is removed.
  13. [SOLVED] Node reboots unexpectedly when quorum is lost

    Works as advertised: your cluster has 3 votes. With one node away there are 2 votes left, and 2 out of 3 is a majority. If your Raspi also goes away, only 1 vote is left, which is a minority, so it quits
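The vote arithmetic described above can be sketched in plain shell; the variable names are illustrative, not actual corosync settings:

```shell
# Majority rule: a cluster with N expected votes is quorate only while
# strictly more than half of the votes are present.
expected_votes=3
majority=$(( expected_votes / 2 + 1 ))   # 2 of 3 votes needed

for votes_present in 3 2 1; do
  if [ "$votes_present" -ge "$majority" ]; then
    echo "$votes_present/$expected_votes votes: quorate"
  else
    echo "$votes_present/$expected_votes votes: no quorum, node quits"
  fi
done
```

With 2 of 3 votes the cluster stays quorate; at 1 of 3 it drops below the majority threshold, which is why the remaining node reboots/fences itself.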
  14. CT container can't access network

    Now post the data of the broken container
  15. [SOLVED] no route to host

    Also post the output of ss -tlpen
  16. [SOLVED] no route to host

    Watch your syslog for blocked/dropped packets: run journalctl -f while trying the SSH connection. Also, can you connect from the Rocky Linux instance to itself? user@rocky:# ssh -v 192.168.1.8
  17. [SOLVED] no route to host

    Do a tcpdump -ni any port 22 inside your VM and try to SSH into it. Check if you see incoming and/or outgoing packets
  18. Issue resizing LXC Container Storage

    Your ZFS pool is almost full, so subvolume 100 is almost full, too. Technically, pct resize sets the refquota, which limits the space the subvolume can take inside the pool, but your pool is full.
  19. Issue resizing LXC Container Storage

    Do a zfs list on your Proxmox host and post the result
  20. Container using all disk space?

    Then it was created as a sparse file. In a sparse file, instead of a million zeros being written, Linux records "here come a million zeros". You see the difference with ls -l file and du file: ls shows you the nominal size, du the real size on disk. But the raw file will not shrink
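The ls vs. du difference described above can be reproduced with a sparse file; a minimal sketch using truncate (the filename is arbitrary):

```shell
# Create a 10 MiB sparse file: the nominal size is set, but no data
# blocks are allocated on disk.
truncate -s 10M sparse.img

ls -l sparse.img   # shows the nominal size: 10485760 bytes
du -k sparse.img   # shows the allocated size: (close to) 0 KiB
```

Writing real data into the file allocates blocks; deleting that data inside a guest does not return them automatically, which is why a raw image file does not shrink on its own.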