Bonding requires support from the switch.
You have to configure a static LAG for balance-rr.
Only active-backup works without switch support, but that's failover, not link aggregation.
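For reference, a minimal sketch of an active-backup bond in /etc/network/interfaces (interface names and addresses are assumptions); for balance-rr you would swap the bond-mode and configure a static LAG on the switch:

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup   # works without switch support, but failover only
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0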
On average, the rated TBW is exceeded 3-5x before errors occur.
No need to start over: just add a second disk, clone the partition layout from the old disk, and randomize the GUIDs.
Attach the new ZFS partition to the existing pool to convert it into a mirror (RAID1) and let it resilver.
Set up ESP sync for the EFI bootloader...
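A rough sketch of those steps, assuming the Proxmox default layout (partition 2 = ESP, partition 3 = ZFS), with /dev/sda as the old disk, /dev/sdb as the new one, and rpool as the pool name (all placeholders):

sgdisk --replicate=/dev/sdb /dev/sda    # clone the partition table from the old disk
sgdisk --randomize-guids /dev/sdb       # give the clone fresh GUIDs
zpool attach rpool /dev/sda3 /dev/sdb3  # attach converts the single disk to a mirror and starts the resilver
proxmox-boot-tool format /dev/sdb2      # format the new ESP
proxmox-boot-tool init /dev/sdb2        # register it so kernels/bootloader get synced to it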
Since the destination shows 0% loss, there is no problem.
What happens if you ping the PVE node directly? 0% loss, I assume.
Dropping low-TTL packets isn't a bad thing; routers routinely rate-limit or deprioritize ICMP replies for packets that expire at them.
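A quick way to check (the IP is a placeholder for your PVE node):

mtr -rwc 100 192.168.1.10   # report mode, 100 probes per hop
# loss that appears only on intermediate hops while the final hop shows 0%
# is usually just ICMP rate-limiting on those routers, not real packet loss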
ZFS doesn't care about that: it writes its labels directly onto the disk and scans all disks on startup.
Pools are mapped automatically even if device IDs change.
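You can see the on-disk labels that make this work (the device path is an assumption):

zdb -l /dev/sdb3   # dump the ZFS label stored on the partition (pool name, GUIDs)
zpool import       # with no arguments: scan devices and list pools found via those labels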
You should write that into the cron file ^^
Just copy the following into the terminal window:
cat << 'EOF' >> /var/spool/cron/crontabs/root
@reboot sysctl -w vm.swappiness=1
EOF
You have to count "used" + "buff/cache", because that's what the VM actually occupies on the host. And it's exactly what Proxmox reports.
Disabling the page cache is not possible in Linux; just don't overcommit memory when using VMs.
https://www.linuxatemyram.com/
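As a quick illustration (the numbers are made up), free -m inside a guest might show:

free -m
#                total   used   free  shared  buff/cache  available
# Mem:            3931    900    300      20        2731        2800
# what the host sees the guest occupying: used + buff/cache = 900 + 2731 = 3631 MB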
I did benchmarks, and PBS runs on basically everything.
I use it in a VM on my Synology NAS.
1 GB RAM and 1 vCPU per PVE node works fine.
The CPU is barely used and the drive bandwidth is maxed out.
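For example, a small PBS VM could be created like this (VMID, storage and ISO names are placeholders):

qm create 200 --name pbs --memory 1024 --cores 1 --net0 virtio,bridge=vmbr0
qm set 200 --scsihw virtio-scsi-pci --scsi0 local-lvm:32
qm set 200 --cdrom local:iso/proxmox-backup-server.iso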
Efficiency is about the same; it depends on the workload.
Do you want to run many VMs (3700X), or do you want good single-thread performance (5600X)?
https://www.cpubenchmark.net/compare/AMD-Ryzen-7-3700X-vs-AMD-Ryzen-5-5600X/3485vs3859
I only use Windows for testing, so I don't care about breaking changes.
If it stays the way it is now, it's a set-and-forget option; end users would never change it.
I'm fine with it being just a config-file option. I assume that setting the machine type back to "q35" will revert the change ...
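For illustration (VMID 100 is an assumption), pinning a machine version versus the rolling alias:

qm set 100 --machine pc-q35-7.2   # pin the guest to a fixed q35 machine version
qm set 100 --machine q35          # the plain alias follows the newest available version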