In my view this here: https://www.truenas.com/community/resources/absolutely-must-virtualize-truenas-a-guide-to-not-completely-losing-your-data.212/ is still correct. I have not seen any evidence to the contrary yet, even though SCALE is now on...
Here are other threads about the same topic:
https://forum.proxmox.com/threads/help-me-to-understand-the-used-space-on-zfs.47934/
https://forum.proxmox.com/threads/zfs-space-inflation.25230/
Sure.
That is not caused by the Proxmox installer.
How are you installing, then? With a monitor attached directly, or via a remote connection through a BMC?
Maybe try a password that is identical on both the German and the English keyboard layout...
Yes, because swap files on ZFS datasets or ZFS volumes are problematic, see https://pve.proxmox.com/wiki/ZFS_on_Linux#zfs_swap
If somebody wants swap together with ZFS, they should use a dedicated partition or zramswap (since then the swap won't...
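If it helps, here is a minimal sketch of the zramswap route on a Debian-based PVE host, assuming the zram-tools package is used (the exact variable names in /etc/default/zramswap can differ between versions):

    # Install zram-tools, which provides the zramswap service on Debian/PVE
    apt install zram-tools

    # Adjust the compressed-swap settings; these variable names assume the zram-tools default config
    #   ALGO=zstd        # compression algorithm
    #   PERCENT=25       # use up to 25% of RAM as compressed swap
    nano /etc/default/zramswap

    systemctl restart zramswap.service
    swapon --show        # verify that the zram device is active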
You might want to read the best-practice pages for Windows guests; it's important to have the correct drivers and machine settings for sufficient performance:
https://pve.proxmox.com/wiki/Windows_11_guest_best_practices...
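To give a rough idea of the kind of machine settings meant there, something along these lines (VM ID 101 and the exact values are only examples; the linked page is the authoritative reference):

    # Example settings for a Windows guest on a recent PVE version
    qm set 101 --scsihw virtio-scsi-single       # VirtIO SCSI controller for the disks
    qm set 101 --net0 virtio,bridge=vmbr0        # VirtIO network adapter
    qm set 101 --machine q35 --cpu host          # modern machine type, host CPU flags
    qm set 101 --ostype win11
    # The VirtIO drivers ISO needs to be attached during installation so Windows can see the disk.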
Hey kleumo,
maybe one idea is to use namespaces, one for the snapshot-mode backups and one for the stop-mode backups, and configure different prune options?
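Roughly sketched, assuming a shared PBS datastore and PVE/PBS versions with namespace support (the storage names and namespace names are just placeholders; check the exact storage.cfg keys for your version):

    # Two PBS storage entries in /etc/pve/storage.cfg pointing at the same datastore,
    # but at different namespaces
    pbs: pbs-snapshot
        datastore backup
        server pbs.example.local
        username backup@pbs
        namespace snapshot-jobs

    pbs: pbs-stop
        datastore backup
        server pbs.example.local
        username backup@pbs
        namespace stop-jobs

    # Then create one backup job in snapshot mode targeting pbs-snapshot and one in stop mode
    # targeting pbs-stop, and set different prune options per namespace on the PBS side.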
You can place the QCOW2 file in the appropriate directory (/var/lib/vz/images/101, see https://pve.proxmox.com/wiki/Storage:_Directory). Name it appropriately, e.g. vm-101-disk-10.qcow2. Then you can run "qm disk rescan --vmid 101". This should pick the disk up...
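Sketched out as commands, with a placeholder source path:

    # Paths and names as in the post; adjust to your VM
    mkdir -p /var/lib/vz/images/101
    cp /path/to/source.qcow2 /var/lib/vz/images/101/vm-101-disk-10.qcow2
    qm disk rescan --vmid 101
    # The image should now appear as an unused disk in the VM's hardware tab and can be attached from there.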
The good news is that the test installation now runs on the first attempt without any complicated settings.
On the bad side, the Windows installation (moved from a physical machine) is extremely slow and has very slow response (lagging). It is...
It's worth revisiting what Ceph is and how it works.
Ceph is software-defined storage, which is to say there is an algorithm and rules. In a normal virtualization workload the pool rules look like this:
replicated, size 3 shards (members) in a PG...
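On the CLI those pool rules roughly correspond to the following (the pool name is just an example):

    ceph osd pool create vm-pool 128            # create a pool with 128 placement groups
    ceph osd pool set vm-pool size 3            # three replicas (shards/members) per PG
    ceph osd pool set vm-pool min_size 2        # stay writable as long as two replicas are available
    ceph osd pool get vm-pool size              # verify the setting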
Well..., I disagree.
One major feature is self-healing of damaged blocks (detected during "scrubbing"). That feature requires some redundancy to work, so it does NOT work with a single device. (There is an option to store each and every block...
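For illustration, this is how scrubbing and self-healing play together on a redundant vdev (device names are placeholders):

    zpool create tank mirror /dev/sda /dev/sdb
    zpool scrub tank                 # checksums every block and repairs bad copies from the mirror partner
    zpool status -v tank             # shows repaired and unrecoverable errors after the scrub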
Well, TrueNAS also allows running VMs or LXCs with its newest release, so I would expect much better performance if you set up PBS inside a TrueNAS VM or an LXC. For LXC there is a write-up in this forum...
Then (sooner or later) bad things happen. In any case the Ceph pool is immediately degraded, and stays that way permanently. There is no spare node available that could take over the data (size=3) --> no automatic...
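In practice this shows up with the standard Ceph tooling, something like:

    ceph health detail               # reports something like "Degraded data redundancy: ... pgs degraded/undersized"
    ceph osd pool get vm-pool size   # still 3, but too few hosts remain to actually hold three replicas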
ZFS works with a few GB. I doubt BTRFS works fine without any RAM...
The historic rule (1 GB of RAM per 1 TB of disk and/or "50% of RAM") was dropped a long time ago. The current default maximum ARC for PVE is "...clamped to a maximum of 16 GiB.", from...
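If you want to check or change the ARC limit yourself (the 8 GiB value is just an example):

    cat /sys/module/zfs/parameters/zfs_arc_max                              # current limit in bytes (0 = built-in default)
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf    # cap the ARC at 8 GiB
    update-initramfs -u                                                     # make the setting persistent across reboots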
Hardware RAID is the "classic" approach.
Nowadays it lacks some sophisticated features: https://forum.proxmox.com/threads/fabu-this-is-just-a-small-setup-with-limited-resources-and-only-a-few-disks-should-i-use-zfs-at-all.160037/
PS...
Follow-up question:
Does your setup really need eight machines? Or do you have eight users with one VM each? If the latter is true: can't they use the same VM? In that case you could give it 32 GiB to let two users run that 16 GB job at the same...
For reference, here are the mentioned dynamic memory management and zramswap options:
KSM and ballooning:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#kernel_samepage_merging
https://pve.proxmox.com/wiki/Dynamic_Memory_Management
zram swap...
I fear ballooning works even worse and simply takes away memory (without negotiating with the OS inside the VM) from all VMs (if they have the same number of shares), and in this case will only result in all VMs having the same amount (which is...
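For completeness, this is roughly how ballooning and shares are configured per VM (the VM ID and values are only examples):

    qm set 101 --memory 8192 --balloon 2048     # VM may be shrunk down to 2 GiB under host memory pressure
    qm set 101 --shares 2000                    # twice the default weight when memory is reclaimed
    # With equal shares on all VMs, the auto-ballooning logic distributes the shortfall evenly across them.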