This post is to report/complain about/request a fix for the inconsistent units used by the memory stats in the GUI.
Cluster Summary: GiB (base 2 / 1024)
Node or VM Summary bar (in the first section with the hostname): GiB (base 2 / 1024)
Node or VM Summary Memory usage graph: GB (base 10 /...
I just realized - refreservation is how things are handled when you don't use sparse/thin-provisioning on ZFS. So when you said before that you didn't have sparse enabled, that is why. The setting really only affects when a new virtual disk is created, and unlike other storage systems, ZFS lets...
Setting refreservation=none will remove it, and you should see the Used size decrease to match the Refer value. As to how you ended up with these, I have no clue.
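A quick sketch of what that looks like on the command line (the dataset name rpool/data/vm-100-disk-0 is a hypothetical example; substitute your own volume, and check the value before changing it):

```shell
# hypothetical volume name; list yours with: zfs list -t volume
zfs get refreservation rpool/data/vm-100-disk-0

# remove the reservation
zfs set refreservation=none rpool/data/vm-100-disk-0

# Used should now shrink toward Refer
zfs list -o name,used,refer rpool/data/vm-100-disk-0
```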
Ok right, the names of properties can vary a bit between ZFS implementations (most of the documentation that exists is for the original made by Sun, as opposed to OpenZFS and ZFSonLinux). So, notice how your refreservation on two of those volumes is the same as the amount they are “using”...
Ah here we go. See the values in “USEDREFRESERV” column? That indicates that you have created minimum size reservations on those volumes, and they haven’t used all of their reserved space yet. This document should help https://docs.oracle.com/cd/E19253-01/819-5461/gazvb/index.html - in...
How about “zfs list -t all -o space” and “zfs get compressratio”? The first should help show exactly where that space is being used, the latter is a fairly obvious command.
When you "erase" all the unused space with zeroes, that only affects the current/live version of the volume. The old bits that were overwritten are still going to be kept in your snapshots - otherwise, how would you recover that data? Check your snapshots with a command like "zfs list -t...
I would suggest making this show up in the Status section of the storage's Summary pane, because otherwise you have an error indication with no way of seeing what the error is. My other suggestion is that this percentage should be configurable, e.g. for Ceph I know it's common to not want to go...
"Refer" is the amount that the current item actually uses. "Used" includes all space taken up by snapshots for that volume. So you have 20.9GB for the VM at present, plus 39.1GB from snapshots.
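To illustrate how those numbers line up (hypothetical pool/volume name and output shape; USED is roughly REFER plus snapshot space):

```shell
# hypothetical volume; the USEDSNAP column shows space held only by snapshots
zfs list -o name,used,refer,usedbysnapshots rpool/data/vm-101-disk-0
# NAME                      USED   REFER  USEDSNAP
# rpool/data/vm-101-disk-0  60.0G  20.9G  39.1G
```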
Use "zfs get compressratio -t volume" to see compression ratio, and "zfs list -t all" to see your...
There is another topic about the same issue: https://forum.proxmox.com/threads/nfs-brown-schriek.38588/. People are seeing this on NFS and ZFS also. Needs to have some explanation either in the UI or the wiki ( https://pve.proxmox.com/wiki/Storage )
Just got this on one of my ZFS-backed storages, just on one machine. 3.55TB free out of 9.5TB total (usable space, not raw), and definitely not over-provisioned (yes I'm using RAW volumes with ZFS that are "thin provisioned", but the total "promised" to the VMs is less than the capacity of the...
Try dragging the gray column on the left edge out to the right. Looks like you completely hid your navigation tree sidebar. If that fails, click the gear next to 'root@pam' at the top, and then click "Reset Layout".
I have an old 1GB USB drive that I want to dedicate to being my Proxmox install stick (I've already labeled it as such). I tried to set it up the same way as I always have with other USB sticks:
dd if=/dev/zero of=/dev/sdX status=progress
dd if=proxmox-ve_*.iso of=/dev/sdX status=progress
sync...
Shut down both machines (or at least, shut down the one you are removing the drive from). Manually edit the config files for both VMs (in /etc/pve/qemu-server/ ) and move the line for the particular virtual drive from the one machine to the other. The VM-id in the name of the virtual disk should...
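A minimal sketch of the line-moving step, using temp copies so nothing real is touched (the VM ids 100 and 101 and the scsi1 drive line are hypothetical examples; the real files live in /etc/pve/qemu-server/<vmid>.conf, and the disk itself would still carry the old VM-id in its name):

```shell
# work on throwaway copies to illustrate; do this on the real .conf files only with both VMs shut down
mkdir -p /tmp/qemu-server-demo && cd /tmp/qemu-server-demo
cat > 100.conf <<'EOF'
scsi0: local-zfs:vm-100-disk-0,size=32G
scsi1: local-zfs:vm-100-disk-1,size=8G
EOF
cat > 101.conf <<'EOF'
scsi0: local-zfs:vm-101-disk-0,size=16G
EOF

# move the scsi1 drive line from VM 100's config to VM 101's
grep '^scsi1:' 100.conf >> 101.conf
sed -i '/^scsi1:/d' 100.conf

cat 101.conf
```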