This CAN be done, but only if you use macOS as your parent OS and virtualize Windows and Linux. You can use Parallels, VMware (which is free now, BTW), or UTM. PVE is not meant for desktop use as you describe.
FWIW, I observe the opposite :).
Here are my observations (posted in https://forum.proxmox.com/threads/proxmox-is-backing-up-my-entire-physical-disk-rather-than-just-used-space.165057/post-813185 ):
"usually subsequent backups of running...
It's an innocent enough question, but there are a lot of gotchas you need to consider.
Clusters are made up of three elements: compute, storage, and networking. Let's touch on each.
COMPUTE:
- The Dell R630 is a 10-year-old platform. As such, it offers...
Why do you care about source IP? I'm talking about just blocking all inbound traffic to port 8006 on the host IP and leaving 22 open. It will then work like you want without hacking around on PVE. It might break stuff, but so might your way.
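A minimal sketch of that host-firewall approach, assuming plain iptables on the default INPUT chain (adapt for nftables or the built-in PVE firewall as needed):

```shell
# Sketch only: keep SSH reachable, drop all inbound traffic to the web GUI port.
# Assumes iptables with the default INPUT chain; not persisted across reboots.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # leave SSH open
iptables -A INPUT -p tcp --dport 8006 -j DROP    # block the GUI from all sources
```

To persist these across reboots you would still need iptables-persistent or equivalent, which is one reason the PVE firewall is usually the cleaner tool for this.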
I've seen...
Well I feel foolish, I followed the docs and it worked!
echo 'LISTEN_IP="127.0.0.1"' >> /etc/default/pveproxy
systemctl restart pveproxy
ss -lntp | grep 8006
LISTEN 0 4096 127.0.0.1:8006
And now I can do:
ssh -i .ssh/id_ed25519...
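For anyone following along, the piece that makes a loopback-only pveproxy reachable is a plain SSH local port forward; the hostname and key path below are placeholders, not values from the post:

```shell
# Forward local port 8006 to the pveproxy now bound to the host's loopback.
# "pve.example.com" and the key path are placeholders for your own values.
ssh -i ~/.ssh/id_ed25519 -N -L 8006:127.0.0.1:8006 root@pve.example.com
# Then open https://localhost:8006 in a local browser.
```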
What makes you think that? After all, the backup client can also be used on bare-metal Linux servers, vservers, or workstations, and those don't necessarily have virtualization enabled...
It is good to change it to something that supports the newer drivers. In my experience, too, Linux can get pretty picky if client drivers do not match the host exactly. lol
They do bring some improvements, since optimizations and the like work on any...
I have achieved moving VMs between existing datastores:
* create a remote "localhost"
* add sync job on the target datastore, pulling from localhost's source datastore with a group filter only covering the desired vm
* run-now the sync...
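From memory, the CLI equivalent of those steps looks roughly like the following; flag names should be double-checked against `proxmox-backup-manager help`, and the store names, credentials, fingerprint, and VMID are all placeholders:

```shell
# Step 1: register this host as a remote named "localhost" (placeholder credentials).
proxmox-backup-manager remote create localhost \
    --host 127.0.0.1 --auth-id root@pam --password 'secret' \
    --fingerprint <cert-fingerprint>

# Step 2: sync job on the target datastore, pulling only the group for VM 100
# (placeholder VMID) from the source datastore via the "localhost" remote.
proxmox-backup-manager sync-job create move-vm100 \
    --store target-store --remote localhost --remote-store source-store \
    --group-filter group:vm/100

# Step 3: trigger the job once (the post above used the GUI's "Run now" button).
```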
I have it down to three(?) potential 'fixes', ordered by guessed probability:
* hugepages (seems needed, plausible cause)
* vmware sata driver issues vs. kvm q35 sata (switch esxi to nfs datastore, hdd and ssd backed)
* clean / careful cpu numa pinning...
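For the hugepages item, the PVE side is a one-line VM config option plus kernel boot parameters; the page count and VMID below are examples, not recommendations:

```shell
# Reserve 1 GiB hugepages at boot by appending to the kernel cmdline (example sizes):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
# Then tell the VM to use them (VMID 100 is a placeholder):
qm set 100 --hugepages 1024   # 1024 = 1 GiB pages; "2" (MiB) or "any" also accepted
```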
NFS is also a possibility, and it supports snapshots if you use qcow2 as the format for the VM disks. Some people prefer this to ZFS for small clusters because it is "real shared storage" instead of relying on the asynchronous replication of ZFS.
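Registering such an NFS export as PVE storage is a single `pvesm` call; the server address, export path, and storage ID below are made up for illustration:

```shell
# Register an NFS export as a PVE storage for VM disk images.
# qcow2 images placed on it support snapshots; all values here are placeholders.
pvesm add nfs shared-nfs --server 192.0.2.10 --export /srv/pve-images \
    --content images --options vers=4.2
```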
The whole discussion around storage options seems to be needlessly focused on filestore.
First-class storage citizens for PVE are ZFS (standalone) and Ceph (cluster). Reusing existing iSCSI/FC SANs with PVE necessitates tradeoffs: no snapshots (this...
Ok, then I misunderstood @david_tao. I thought he meant that one could combine HW RAID and ZFS on PERCs.
I already knew that modern controllers can switch between HBA and RAID mode.