Search results

  1. Move VMs between Proxmox clusters

    Simple to do offline using PVE backups:
    - create a dataset on a NAS (or on one of your PVE clusters) and export it using NFS
    - create a new PVE storage pool on each PVE cluster, both mounting this NFS filesystem; set this PVE storage for dumps
    - do a backup of the VM you want to migrate using...
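    For illustration, a minimal sketch of that workflow with the standard PVE tools; the storage name, NFS server/export path, VM ID, and target storage below are made-up placeholders:

      # on each cluster: add the shared NFS export as a backup storage
      pvesm add nfs shared-dumps --server 192.168.1.50 --export /export/pve-dumps --content backup

      # on the source cluster: back up the VM to the shared storage
      vzdump 101 --storage shared-dumps --mode stop --compress zstd

      # on the target cluster: restore from the shared dump directory
      qmrestore /mnt/pve/shared-dumps/dump/vzdump-qemu-101-<timestamp>.vma.zst 101 --storage local-zfs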
  2. Proxmox install on Minisforum ms-01

    Install the microcode update after installing Proxmox. The instructions are linked in my initial reply above.
  3. Proxmox install on Minisforum ms-01

    Make sure you install the Intel Microcode updates. Instructions about halfway down the page here. These instructions are for PVE 7.x but they apply the same way to 8.x. This is important because (a) the more recent microcode improves stability of VMs on Intel Big/Little architecture CPUs used...
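    For reference, a rough sketch of the usual steps on PVE 8 (Debian Bookworm base), assuming the non-free-firmware APT component is not yet enabled:

      # add the non-free-firmware component to /etc/apt/sources.list, then:
      apt update
      apt install intel-microcode
      reboot
      # verify the microcode was applied at boot
      journalctl -k | grep -i microcode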
  4. Proxmox 8.2 / Kernel 6.8 breaks iGPU passthrough for UHD630

    Yes (sorta). Pinned kernel 6.5. More seriously - I get the impression that this will not be fixed in the 6.8+ kernel stream anytime soon. Something got badly broken, and looking at other message threads/mailing lists it appears that this is going to take a while.
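    On reasonably current PVE releases, pinning an older kernel can be done with proxmox-boot-tool; a sketch (the exact 6.5 version string will differ per system, so take it from the list output):

      # list installed kernels, then pin the 6.5 kernel you want to keep booting
      proxmox-boot-tool kernel list
      proxmox-boot-tool kernel pin <6.5.x-y-pve version from the list>
      reboot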
  5. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Upgraded to kernel 6.8.8-1 and the memory leak appears to be fixed.
  6. Proxmox 8.2 / Kernel 6.8 breaks iGPU passthrough for UHD630

    iGPU passthrough using PCIe passthrough for an Intel UHD 630 (i9-10900) works perfectly on Proxmox <= 8.0 and Kernel 6.5.x. But after upgrading to Proxmox 8.2 and kernel 6.8.x the whole system just hangs. Can't even get good logs to post. Reverting to the kernel 6.5 boot environment...
  7. Remote access Zigbee/Zwave controllers for Live Migration (High Availability)

    Your Zigbee/Zwave radio device (USB Stick or whatever) is always going to be a single point of failure for you. So even if you get the z2m application somehow "portable" for HA you are still down if you lose the z-device itself. You won't ever achieve HA using the existing Z protocols as they...
  8. Random kernel panics

    Give it a try with "intel_idle.max_cstate=1 processor.max_cstate=5". This setting seems to have given my AMD-based system stability while maintaining reasonable power usage. Based on the open Linux bugs it appears that C-states >= 6 are where the trouble starts.
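    For reference, a sketch of how those parameters would typically be added on a GRUB-booted PVE host (on systemd-boot/ZFS installs, edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=5"

      # apply and reboot, then confirm the flags are active
      update-grub
      reboot
      cat /proc/cmdline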
  9. Random kernel panics

    Have you tried disabling C-state 6 (and lower)? There seems to be an ongoing issue with random restarts on AMD systems and Linux 5+ kernels; the related bug reports are all open and describe similar symptoms. Disabling C-state 6 is suggested in a few of them as stabilizing the affected systems...
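    As a non-persistent way to test this before touching the boot command line, deep C-states can also be disabled at runtime through sysfs; a sketch (the state index for C6 varies by CPU, so check the name files first - state3 below is only an assumption):

      # find which cpuidle state corresponds to C6 on this machine
      grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
      # disable that state (assumed here to be state3) on every core
      for cpu in /sys/devices/system/cpu/cpu[0-9]*; do echo 1 > "$cpu/cpuidle/state3/disable"; done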
  10. Random kernel panics

    I think I've gotten my system stable but I am really not happy with the side effects of the workaround. It's been running stable for a few days, but I'd really like to get at least 30 days running before declaring victory (a Pyrrhic victory, perhaps). For background, so people don't have to...
  11. Random kernel panics

    Exactly the same problem with an m90q gen2 with i5-11500. Completely “silent” restarts (no messages, no kernel dump, no evidence of a panic). It seems to happen more frequently when there is heavy write activity to NVMe (using Gen4 M.2 drives). Updated BIOS does not seem to help. Tried all...
  12. [SOLVED] What feature is missing in PVE 7.4 without subscription?

    Support. And access to the more thoroughly tested enterprise repo. That's about it.
  13. Proxmox scalability - max clusters in a datacenter

    OpenStack is probably overkill - unless you actually need the bells and whistles it brings. And its multi-site federation is a bit of a bolt-on - it works, but it is clumsy and not native to how OpenStack was originally designed. For multi-site management of a large number of small (3-5 node) sites doing...
  14. ZFS arc growing without limit - pve 7.3.3, kernel 5.15.74.1

    I can do that - but as I've already updated zfs.conf to force the ARC limit you won't see the condition when it was causing problems. I'll need to find a good time to revert one of the servers and see if I can get it to reproduce. I really offer my post as a caution to others - explicitly...
  15. ZFS arc growing without limit - pve 7.3.3, kernel 5.15.74.1

    Over the last few days I had some odd failures and I started monitoring memory use much more closely. I've noticed that on all of my servers that do not have an explicit ARC limit set, the ZFS ARC grows to consume 100% of RAM and then I start losing processes due to OOM. I don't know exactly when...
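    The usual fix is to cap the ARC explicitly; a sketch assuming an 8 GiB limit (pick a value that fits your RAM and workload):

      # /etc/modprobe.d/zfs.conf -- 8 GiB ARC cap (8 * 1024^3 bytes)
      options zfs zfs_arc_max=8589934592

      # rebuild the initramfs so the limit applies from early boot, then reboot
      update-initramfs -u -k all

      # or apply immediately at runtime without a reboot
      echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max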
  16. Help Moving Storage to ZFS // Docker not working!

    You have to tell Docker to use its ZFS storage driver. In the file /etc/docker/daemon.json add this: { "storage-driver": "zfs" } Then restart the Docker daemon. See here for more info.
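    Spelled out, assuming Docker's data root already lives on a ZFS dataset:

      # /etc/docker/daemon.json
      {
        "storage-driver": "zfs"
      }

      # restart and confirm the driver in use
      systemctl restart docker
      docker info | grep -i "storage driver"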
  17. Mini PC for Proxmox?

    Maybe this? Ryzen 7 5825U, 2x M.2 for storage, supports 64GB & Intel i226-V NICs. Seems to check all your boxes. https://www.servethehome.com/amd-ryzen-4x-2-5gbe-intel-i226-firewall-router-for-pfsense-opnsense-proxmox-and-windows/ Edit: just read your last post - I guess this answer is a day...
  18. Chrony vs systemd-timesyncd on PVE 7.2

    Time sync problems usually occur when one or more of your hosts selects the “wrong” sync source, or there is a problem with that sync source. systemd-timesyncd uses SNTP; it only tracks a single source, and if the hosts in your cluster using systemd-timesyncd lock onto different sources and...
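    A sketch of switching a node to chrony with one consistent set of sources (the pool name below is a placeholder; point every cluster node at the same servers):

      apt install chrony            # replaces systemd-timesyncd on Debian/PVE

      # /etc/chrony/chrony.conf -- use the same sources on every node
      pool ntp.example.org iburst

      systemctl restart chrony
      chronyc sources -v            # confirm which source each node is tracking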
  19. PCI nuc

    It's definitely not an enterprise-class device, but the NUC 12 PCIe card is no slouch either: i9-12900, 3x M.2 slots (PCIe Gen4), 10GbE + 2.5GbE LAN, 2x Thunderbolt 4. If I were building a business I'd probably use traditional servers. But you could build one heck of a cluster out of these...
  20. Ceph is not configured to be really HA

    In order to remain HA, Ceph requires you to supply enough “spare” resources to absorb a failure. You need to have enough disk space free on each host to absorb the loss of your largest OSD on that host. Further, in a cluster with replica 3, you really should have at least 4 hosts in...
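    To sanity-check whether a cluster actually has that headroom, the standard Ceph status commands are enough; a sketch:

      ceph df                      # overall raw vs. usable capacity per pool
      ceph osd df tree             # per-host / per-OSD utilisation: could the largest OSD's data fit elsewhere on that host?
      ceph osd dump | grep ratio   # nearfull/backfillfull/full ratios that will throttle or stop recovery and I/O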
