You have an nvme-2tb/vm-130-disk-1 "Disk"; this is a ZFS volume which uses "referenced 443G".
Then this ZFS snapshot, nvme-2tb/vm-130-disk-1@__replicate_130-0_1766276210__, will also use "referenced 443G".
In total, with all additional zfs...
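To see where the space actually sits, one way (a generic sketch; the dataset name is taken from this thread) is:

# list the volume and all its snapshots with used vs. referenced space
zfs list -t all -r -o name,used,refer nvme-2tb/vm-130-disk-1

A snapshot's "used" column shows only the blocks unique to that snapshot, while "refer" is everything it points at, which is why the volume and its snapshot can both show 443G.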
You'd have to check with Pure on that. I don't have access, because everything is behind the paywall.
With ZFS over iSCSI you can make a ZFS pool accessible over iSCSI, but for that you need a storage box that actually runs ZFS.
With shared storage on...
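For reference, a ZFS over iSCSI entry in /etc/pve/storage.cfg looks roughly like this (a sketch with placeholder pool, portal, and target values, assuming a LIO-based target host):

zfs: zfs-over-iscsi
        pool tank
        blocksize 8k
        iscsiprovider LIO
        lio_tpg tpg1
        portal 192.0.2.10
        target iqn.2003-01.org.linux-iscsi.example:target1
        content images
        sparse 1

The target host itself has to run ZFS and export the zvols; a plain block-storage array like a Pure box cannot fill that role.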
This worked for me. But the disk was also not as empty as I expected.
Most of the extra space was used because the filesystems had never been trimmed. So I activated the discard function on every disk of every VM and trimmed afterwards. Now the space was reclaimed...
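For anyone following along, the two steps in command form (VM ID, storage, and disk spec are placeholders; repeat for every disk):

# on the PVE host: update the disk options to enable discard
qm set 130 --scsi0 local-zfs:vm-130-disk-1,discard=on
# inside the guest: trim all mounted filesystems
fstrim -av

Discard only reaches the storage if the disk sits on a bus that passes it through, e.g. VirtIO SCSI.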
Sure. I don't just delete things indiscriminately. But /var/cache/ is by definition extremely volatile, and it must always be possible to delete its contents.
Often a folder is only created at installation time and is never again checked for existence...
Since Proxmox upgraded to ZFS 2.3.4 I'm having problems with accessing snapshots: it causes kernel panics, making the access hang indefinitely.
Part of my backup strategy is automated backups from container snapshots, but this is broken by this...
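For context, automated tools usually read container snapshots through the hidden .zfs directory; the dataset path below is a made-up example, but this is the kind of access that would hang:

# browse a container dataset's snapshots on the host
ls /rpool/data/subvol-101-disk-0/.zfs/snapshot/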
So they use the same units, fine.
Anyway, don't remove the full 200 GiB, just in case. You'll be trying to decrease it further later anyway.
Not really. The data is still there. But as I wrote, don't start it.
Yes, that's what I wrote.
I'm a little...
I have exactly the same issue in an LXC container. My manager is Incus, but it's roughly the same issue. The current workaround: it appears to work if I disable --now sshd.socket. I don't know why, and the systemd debug log didn't give me enough clues to figure it...
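For anyone hitting the same thing, the workaround spelled out (unit names can differ per distro; on Debian-based images it may be ssh.socket/ssh.service):

# inside the container: drop socket activation, run the daemon directly
systemctl disable --now sshd.socket
systemctl enable --now sshd.service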
Do you know whether you are using GRUB or systemd-boot to boot?
You may want to check https://forum.proxmox.com/threads/update-to-version-9-fail-systemd-boot.172356/post-813782. If you are not using ZFS for your root drive, then it should be using Grub...
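On PVE there is also a quick way to check, using the tool shipped with Proxmox:

proxmox-boot-tool status

It reports for each ESP whether it is set up for GRUB or systemd-boot; on EFI systems, efibootmgr -v shows the current boot entries as an alternative.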
So, I have tried that and also switched to an AMD card I had lying around. Nothing: it kind of went through and then the screen turned black, and I couldn't input anything.
I have tested a bit more. You are totally right.
The command should be without the trailing "/":
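# allow USB character devices (major number 189) inside the container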
lxc.cgroup2.devices.allow: c 189:* rwm
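# bind-mount the host's USB device node into the container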
lxc.mount.entry: /dev/bus/usb/003/002 dev/bus/usb none bind,optional,create=file
The first...
Now I summoned all my courage, completely emptied the cache folder, and rebooted.
All VMs/LXCs are reachable.
The WebGUI, however, is dead, even though nmap shows port 8006 as "open".
purge and the like I always do anyway...
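If the GUI stays down after that, a first thing to try (assuming a standard PVE install) is restarting the web stack:

systemctl restart pveproxy pvedaemon
journalctl -u pveproxy -e    # check why 8006 is open but nothing is served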
Hello,
I have 3 Proxmox Backup Servers, one main and two remotes. The remotes share the same hardware and are configured the same.
One of the remotes has recently been moved off site to replace an existing remote PBS.
Prior to the move, sync jobs...
Thanks for helping out, @Onslow.
On PVE, lvdisplay says about the device:
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-2
  LV Name                vm-100-disk-2
  VG Name                pve
  LV UUID...
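For a one-line view of the same LV, lvs also works (same names as above):

lvs -o lv_name,vg_name,lv_size,lv_path pve/vm-100-disk-2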
TIL about gdu. Noticeably faster than ncdu. :)
@TErxleben depending on how old the host is and what was installed on it beyond PVE at some point, an apt autoremove and apt purge can sometimes do a lot of good too. Old kernels as well...
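As a sketch (review the output before purging anything; on current PVE, autoremove also drops old kernels):

apt autoremove --purge             # remove orphaned packages, incl. old kernels
dpkg -l | awk '/^rc/{print $2}'    # list removed-but-not-purged leftovers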
Thanks for the tip. We added the IPs of the clusters to the firewall, and it worked! We just had to map the server names to the private network in /etc/hosts to force the migration over the PVN, and use detailed mapping, and everything went as...
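For anyone replicating this, the /etc/hosts part is plain static mapping of node names to the private network (names and addresses here are placeholders):

# /etc/hosts on each node - pin cluster names to the migration network
10.10.10.1   pve-node1
10.10.10.2   pve-node2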
I don't have a problem opening the .vv file; I am looking for a way to avoid having to click the 'SPICE' button in the Proxmox UI. I was hoping for something similar to RDP: type in the IP address and connect.
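One way to get close to that (not an official feature; a sketch against the Proxmox API with host, credentials, and VM ID as placeholders; needs curl, jq, and remote-viewer):

#!/bin/bash
HOST=pve1.example.com; NODE=pve1; VMID=100     # placeholders, adjust to taste
# 1) fetch an auth ticket and CSRF token
AUTH=$(curl -sk -d 'username=root@pam' -d 'password=secret' \
      "https://$HOST:8006/api2/json/access/ticket")
TICKET=$(echo "$AUTH" | jq -r .data.ticket)
CSRF=$(echo "$AUTH" | jq -r .data.CSRFPreventionToken)
# 2) the spiceproxy call returns the fields a .vv file needs
curl -sk -X POST -b "PVEAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" \
     "https://$HOST:8006/api2/json/nodes/$NODE/qemu/$VMID/spiceproxy" \
  | jq -r '"[virt-viewer]", (.data | to_entries[] | "\(.key)=\(.value)")' > vm.vv
# 3) open the console without touching the web UI
remote-viewer vm.vv

You still type a hostname rather than clicking through the UI; a stored API token instead of the plain password would bring it closer to the RDP feel.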