Is there a way to make them use only the space they are actually using? I have read a lot about trim and discard, but it seems there is no easy way.
Any tips? I am currently trying it myself, but if you have any experience you want to share, I would really appreciate it.
Example: VMware-VM 40GB...
The Proxmox VE 6.3 (Hyper-converged Infrastructure) cluster has a virtual machine running MS Windows 2016.
Characteristics of this virtual machine
VMs with Debian 10 or Ubuntu 20.04, configured according to the instructions at https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files#Linux_Guest_Configuration
fstrim -av reports that it successfully freed about 10 GiB.
But the qcow2 image size has not changed (the VM filesystem uses 7.7G, while the qcow2 file is still 20G).
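In case it helps: with discard enabled, fstrim punches holes in the backing image, so the file's apparent size stays the same while its actual allocation shrinks. `ls -l` will still show 20G; check `du` or `qemu-img info` on the image instead. A minimal sparse-file demo of that mechanic (plain files, no Proxmox involved; assumes the filesystem supports hole punching):

```shell
# Sparse-file demo of what fstrim does to an image with discard enabled:
# hole punching frees allocated blocks while the apparent size stays put.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 status=none
before=$(du -k "$f" | cut -f1)    # allocated KiB before punching
fallocate --punch-hole --offset 0 --length $((8 * 1024 * 1024)) "$f"
after=$(du -k "$f" | cut -f1)     # allocation drops by roughly 8 MiB
size=$(stat -c %s "$f")           # apparent size is still 16 MiB
echo "before=${before}K after=${after}K apparent=${size}B"
rm -f "$f"
```

The qcow2 behaves the same way: the virtual size reported by `ls` stays fixed while the allocated blocks shrink.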
Today I ran the command pct fstrim 100 on a stopped LXC container and my SSH connection was dropped.
After reconnecting I tried fstrim again and got this error:
mount: /var/lib/lxc/100/rootfs: /dev/mapper/vg0-vm--100--disk--0 already mounted on /var/lib/lxc/100/rootfs.
Ok, trying umount:
# umount -f...
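In case anyone hits the same thing: the error suggests pct fstrim left its temporary mount behind when the session died. A recovery sketch, my guess from the message above rather than a verified procedure (the commands are echoed so the snippet is safe to run anywhere; run them for real on the PVE host):

```shell
# Recovery sketch for a stale mount left by an interrupted `pct fstrim 100`.
# Commands are echoed, not executed; run them manually on the PVE host.
set -- "pct status 100" \
       "umount /var/lib/lxc/100/rootfs" \
       "pct fstrim 100"
for cmd in "$@"; do
  echo "+ $cmd"
done
```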
From 4x 3TB HDDs I created a RAIDz1 pool, giving 7.7 TiB of usable space. Among other things, I use the pool as storage for additional disks in the VMs (currently only files from the NAS VM). A df -h inside the NAS VM shows just under 3 TB used. The Proxmox...
I used to rent servers (only a few, not that many) in a datacenter, all running PVE, and they worked perfectly well.
Nowadays it is a good choice to rent SSDs to put VMs on, so we use hardware RAID with BBU to create mirrors out of SSDs (it used to be MegaRAIDs), but... I can never find a way to check...
Am I correct in assuming that I will need to create a cron job to run "zpool trim" on, e.g., a weekly basis?
Because the "autotrim" property doesn't appear to be set on zpools by default (neither "zfs get all" nor "zdb" show it), and I couldn't find any existing cronjob or timer that runs "zfs...
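For what it's worth, my understanding (please double-check against your OpenZFS version) is that you have two options: enable the pool's autotrim property, or schedule a one-shot zpool trim yourself. A sketch with "tank" as a placeholder pool name (the cron line is echoed rather than installed):

```shell
# Two ways to get periodic TRIM on a pool; "tank" is a placeholder name.
#   zpool set autotrim=on tank    # continuous, kernel-driven trimming
#   zpool trim tank               # one-shot trim, suitable for cron
# A weekly cron entry for the one-shot variant might look like this:
CRON_LINE='0 3 * * 0 root /sbin/zpool trim tank'
echo "$CRON_LINE"                 # contents for e.g. /etc/cron.d/zpool-trim
```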
Good day everyone,
I recently changed the settings on all of my Windows guests. They now all run with the SCSI controller set to VirtIO SCSI and the disks, logically, attached as scsi0 with the discard option. The current SPICE tools with the QEMU agent are installed on all VMs...
I have read all posts tagged with trim but am still a little confused. Currently I have two PVE nodes. Both have PVE installed on an SSD. One has two further SSDs for VM disk images (using LVM-thin, discard always on); the other has HDDs which are passed through directly to the respective VMs...
About returning unused space (trim/discard to shrink a disk's mapped size):
I know about running the `fstrim` command inside the guest VM.
But I cannot log in to the customers' guests!
Now, my question is: is there any way to do this from the host machine (the main Proxmox server)?
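One host-side possibility, assuming the QEMU guest agent is installed in the guest and enabled in the VM's options: qm can ask the agent to trim on your behalf. A sketch with VMID 100 as a placeholder (the command is echoed here rather than executed):

```shell
# Host-side trim through the QEMU guest agent; VMID 100 is a placeholder.
VMID=100
TRIM_CMD="qm guest cmd $VMID fstrim"
echo "$TRIM_CMD"                  # run this on the PVE host itself
```

This still needs the agent running inside the guest, but no guest login.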
Would it be possible to take an entire disk (a 256GB SSD), break it down into smaller partitions, and install rpool on just a fragment of the disk?
Right now, when selecting ZFS RAID1 from the menu, the installer for PVE 5.0 beta2 shows the entire disk /dev/sda; is there any possibility to have the...
Is it possible to add a mount option like discard to an LXC raw storage?
If not, is it safe to use something like the following to trim running containers from the host OS:
fstrim /proc/<PID of LXC init process>/root
Thanks for your help!
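One alternative worth checking before poking /proc: PVE ships `pct fstrim`, which mounts the container volume and trims it from the host. A sketch with container ID 105 as a placeholder (echoed, not executed):

```shell
# Host-side container trim via PVE's own tool; CTID 105 is a placeholder.
CTID=105
FSTRIM_CMD="pct fstrim $CTID"
echo "$FSTRIM_CMD"                # run per container on the PVE host
```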
Hey everyone!
We were moving disks from one storage to another and noticed that when they arrive on the new storage, the thin-provisioned disks expand to their full size.
Before, when we had a few VMs, we could use the old method to empty the disks (using dd to fill the disk with zeros and deleting...
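As far as I understand it, the modern replacement for the zero-fill trick is to attach the disk with discard=on and trim inside the guest after the move; the image then re-thins on storages that support it. A sketch with a placeholder VMID 101 and volume name (commands echoed only, adjust to your setup):

```shell
# Sketch: re-thin a moved disk without dd. VMID 101 and the volume name
# are placeholders; adjust to your setup. Commands are echoed only.
VMID=101
SET_CMD="qm set $VMID --scsi0 local-lvm:vm-$VMID-disk-0,discard=on"
TRIM_CMD="qm guest cmd $VMID fstrim"
echo "$SET_CMD"                   # enable discard on the attached disk
echo "$TRIM_CMD"                  # then trim from inside, via the agent
```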
After reading many posts I am still unclear on the best use of Trim with SSDs.
I have two RAID1 SSDs on a Perc H730 controller which supports trim:
Currently my server boots with...
I've built a new Proxmox 4 machine for the small business I work at. It will mostly run a Postgres database and a few web hosts in LXC, in addition to some Windows KVM test environments; it's an upgrade from an old Core 2 Quad system running PVE 3.4.
I'm using a pair of Samsung...