Proxmox VE 7.2 released!

This feature (the possibility to migrate without any problems between different CPU generations) was a very unique and cool feature; it would be a pity to lose it...
First off, that "feature" was never really guaranteed to be supported; there were always CPU combinations that caused trouble, more so across different vendors but also intra-vendor, especially across different generations. There are some CPU side effects that simply cannot be abstracted away during migration (it often works most of the time, but with bad luck it breaks occasionally).
That's why we recommend using homogeneous enterprise hardware for all production setups.

Secondly, it still works for lots of different model combinations, so it's not as if it's becoming impossible to do.

Also, we naturally backport a fix, if there is one, but just going by some promising commit messages will give you a big set of patches to cherry-pick, possibly bringing a lot of regressions with them, as a) that sometimes just happens and b) they were developed for a different kernel base. That's why I asked whether you already had good experience with the patch you linked: I noticed that one too a few weeks ago, but stopped bothering once I saw the FPU module changes that may break existing (recommended homogeneous) setups, so putting in effort there must be warranted. Maybe I'll give it another try in the upcoming weeks.
 

Thanks for your hard work ;) Maybe in the future you could think about a Proxmox feature similar to what VMware offers in HA clusters with EVC? Enabling VMware EVC puts the CPUs of all hosts into a compatibility mode (the exposed instruction set is limited to that of the lowest common CPU), so migration remains possible between hosts of different generations. It's just an idea.
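A rough, EVC-like effect can already be approximated in Proxmox VE today by pinning VMs to a common baseline virtual CPU model instead of `host`, so that only flags available on every node are exposed to the guest. A minimal sketch, assuming a VMID of 100 and the generic `kvm64` model (both are just examples; check which models your Proxmox VE version offers):

```shell
# Give VM 100 a conservative, widely compatible virtual CPU model,
# so live migration does not depend on host-specific CPU flags.
qm set 100 --cpu kvm64

# Verify the resulting setting in the VM configuration
qm config 100 | grep '^cpu'
```

The trade-off is the same as with EVC: guests lose access to newer instruction-set extensions in exchange for migratability.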
 
Hello, why can't I install Proxmox on a Dell Wyse 5060 with the installation shown in the picture?
Black screen and nothing installs, why? [attached image: VID_20220806_122837_exported_15816.jpg]
 
The GUI will always do an apt dist-upgrade as that's the required way to upgrade Proxmox VE in any way (major or minor).

For the record: note that it's normally safer to use SSH for bigger upgrades, because the pveproxy API daemon, which provides the web shell used for the upgrade, gets restarted too; while that is handled gracefully for minor updates, a major upgrade (6.x to 7.x) can cause trouble.
I have in the past lost the network connection during a full upgrade. Having had to clean up the resulting apt mess, I now always run upgrades (of any sort, on any Linux) inside GNU Screen or tmux. That way, if I lose the connection, I can reconnect over SSH and re-attach to my previous session. An alternative safe method is to run the upgrade at the physical console, if you happen to be near the host. In fact, the console shell in the GUI should perhaps run inside a terminal multiplexer for the same reason.
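As a sketch, a session-protected upgrade on a Proxmox VE node could look like the following (the session name `upgrade` is an arbitrary choice):

```shell
# Start a named tmux session, or re-attach to it if it already
# exists (-A), so the upgrade survives a dropped SSH connection.
tmux new-session -A -s upgrade

# Inside the session, run the full upgrade:
apt update
apt dist-upgrade

# If the connection drops, reconnect over SSH and re-attach:
tmux attach -t upgrade
```

GNU Screen works the same way (`screen -S upgrade` to start, `screen -r upgrade` to re-attach).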
 
Something strange with memory and ZFS after upgrading to the latest kernel, 5.15.39-4-pve:

Code:
root@srv-proxmox:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           64218       24004       39956          70         256       39585
Swap:              0           0           0

root@srv-proxmox:~# sync; echo 3 > /proc/sys/vm/drop_caches
root@srv-proxmox:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           64218       22789       41179          70         249       40804
Swap:              0           0           0

root@srv-proxmox:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 vm1                  running    1536              32.00 5923
       101 vm2                  running    4096              32.00 4140
       102 vm3                  running    8192             200.00 5945
       103 vm4                  running    1536              32.00 5918

Total: ~15 GB allocated to KVM VMs.
Where are the remaining ~7 GB of RAM after sync; echo 3 > /proc/sys/vm/drop_caches ???

OK, back to the 5.13 kernel: proxmox-boot-tool kernel pin 5.13.19-6-pve

After a reboot and a few days of uptime:

Code:
root@srv-proxmox:~# sync; echo 3 > /proc/sys/vm/drop_caches
root@srv-proxmox:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           64219       15314       48750          70         154       48328
Swap:              0           0           0

OK, memory releasing works again with the older 5.13 kernel...
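On a ZFS host, one likely candidate for "hidden" used memory is the ARC: free counts it as used rather than buff/cache, and drop_caches does not release it. A hedged sketch for inspecting it, assuming the standard ZFS-on-Linux paths and the zfsutils tooling (the 8 GiB cap value is just an example):

```shell
# Current ARC size in bytes (ZFS on Linux exposes it via /proc)
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats

# Human-readable summary, if arc_summary from zfsutils is installed
arc_summary | head -n 20

# Optional: cap the ARC at e.g. 8 GiB at runtime (value in bytes)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```

Comparing the ARC size on 5.15 versus 5.13 would show whether the extra "used" memory is really ZFS cache or something the newer kernel fails to release.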
 
