Simple to do offline using PVE backups (rough commands sketched below):
- create a dataset on a NAS (or on one of your PVE clusters) and export it using NFS
- create a new PVE storage entry on each PVE cluster so that both mount this NFS filesystem. Set this PVE storage to hold dumps.
- do a backup of the VM you want to migrate using...
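The commands end up looking roughly like this (the storage name, VMID, server address, and export path below are all made up for illustration):

# on each cluster, register the shared NFS export as backup storage
pvesm add nfs migrate-nfs --server 192.168.1.50 --export /mnt/tank/pve-dumps --content backup
# on the source cluster: back the VM up onto that storage
vzdump 100 --storage migrate-nfs --mode snapshot --compress zstd
# on the target cluster: restore it under a free VMID
qmrestore /mnt/pve/migrate-nfs/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs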
Make sure you install the Intel microcode updates. Instructions are about halfway down the page here. Those instructions are for PVE 7.x, but they apply the same way to 8.x.
This is important because (a) the more recent microcode improves stability of VMs on Intel Big/Little architecture CPUs used...
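For PVE 8.x (Debian 12 base) the install itself is short - a sketch, assuming you first add the non-free-firmware component to your apt sources:

# in /etc/apt/sources.list, append "non-free-firmware" to the Debian repo lines, then:
apt update
apt install intel-microcode
reboot
# verify the loaded revision afterwards:
journalctl -k | grep microcode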
Yes (sorta). Pinned kernel 6.5.
More seriously - I get the impression that this will not be fixed in the 6.8+ kernel stream anytime soon. Something got badly broken, and looking at other message threads/mailing lists it appears this is going to take a while.
iGPU passthrough using PCIe passthrough for an Intel UHD 630 (i9-10900) works perfectly on Proxmox <= 8.0 and Kernel 6.5.x. But after upgrading to Proxmox 8.2 and kernel 6.8.x the whole system just hangs. Can't even get good logs to post.
Reverting to the kernel 6.5 boot environment...
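For anyone needing the same escape hatch, pinning the kernel is simple (the version string below is an example - use whatever proxmox-boot-tool kernel list shows):

proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.13-6-pve
reboot
uname -r    # confirm you're back on 6.5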
Your Zigbee/Z-Wave radio device (USB stick or whatever) is always going to be a single point of failure for you. So even if you somehow make the z2m application "portable" for HA, you are still down if you lose the Z-device itself. You won't ever achieve HA using the existing Z protocols as they...
Give it a try with "intel_idle.max_cstate=1 processor.max_cstate=5". This setting seems to have given my AMD-based system stability while maintaining reasonable power usage. Based on the open Linux bugs, it appears that C-state >= 6 is where the trouble starts.
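On a GRUB-booted host that's just an edit to the kernel command line (sketch below; on a systemd-boot/ZFS-root install you'd add the same options to /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=5"
# then apply and reboot:
update-grub
reboot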
Have you tried disabling C-state 6 (and deeper)? There seems to be a long-running issue with random restarts on AMD systems with the Linux 5+ kernel. The open bug reports all describe similar symptoms, and disabling C-state 6 is suggested in a few of them as stabilizing their systems...
I think I've gotten my system stable, but I am really not happy with the side effects of the workaround. It's been running stable for a few days, but I'd really like to see at least 30 days of uptime before declaring victory (a Pyrrhic victory, perhaps).
For background, so people don't have to...
Exactly the same problem with an M90q Gen2 with an i5-11500. Completely “silent” restarts (no messages, no kernel dump, no evidence of a panic). It seems to happen more frequently when there is heavy write activity to the NVMe drives (Gen4 M.2). An updated BIOS does not seem to help. Tried all...
OpenStack is probably overkill - unless you actually need the bells and whistles it brings. And its multi-site federation is a bit of a bolt-on: it works, but it's clumsy and not native to how OpenStack was originally designed.
For multi-site management of a large number of small (3-5 node) sites doing...
I can do that - but as I've already updated zfs.conf to force the ARC limit, you won't see the condition that was causing the problems.
I'll need to find a good time to revert one of the servers and see if I can get it to reproduce.
I really offer my post as a caution to others - explicitly...
Over the last few days I had some odd failures, and I started monitoring memory use much more closely. I've noticed that on all of my servers that do not have an explicit ARC limit set, the ZFS ARC grows to consume 100% of RAM, and then I start losing processes due to OOM. I don't know exactly when...
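For anyone hitting the same thing, capping the ARC is a two-liner (the 8 GiB value below is just an example - size it to your workload):

# /etc/modprobe.d/zfs.conf - cap the ARC at 8 GiB
options zfs zfs_arc_max=8589934592
# apply at next boot:
update-initramfs -u -k all
# or apply immediately without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max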
You have to tell Docker to use its ZFS storage driver.
In the file /etc/docker/daemon.json add this:
{
"storage-driver": "zfs"
}
Then restart the docker daemon.
See here for more info.
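A quick way to verify it took (note the zfs driver expects /var/lib/docker to already live on a ZFS dataset):

systemctl restart docker
docker info | grep -i "storage driver"
# should report: Storage Driver: zfs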
Maybe this? Ryzen 7 5825U, 2x M.2 for storage, supports 64GB & Intel I226-V NICs. Seems to check all your boxes.
https://www.servethehome.com/amd-ryzen-4x-2-5gbe-intel-i226-firewall-router-for-pfsense-opnsense-proxmox-and-windows/
Edit: just read your last post - I guess this answer is a day...
Time-sync problems usually occur when one or more of your hosts selects the “wrong” sync source, or when there is a problem with that source. systemd-timesyncd uses SNTP and only tracks a single source, so if the hosts in your cluster using systemd-timesyncd lock onto different sources and...
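The usual fix is to run chrony on every node, pointed at the same sources - a minimal sketch (the pool names are just examples):

apt install chrony    # supersedes systemd-timesyncd on Debian/PVE
# /etc/chrony/chrony.conf - same sources on every node:
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
systemctl restart chrony
chronyc sources    # check which source each node actually selected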
It's definitely not an enterprise-class device, but the NUC 12 PCIe card is no slouch either.
i9-12900, 3x M.2 slots (PCIe Gen4), 10GbE + 2.5GbE LAN, 2x Thunderbolt 4. If I was building a business I'd probably use traditional servers. But you could build one heck of a cluster out of these...
In order to remain HA, Ceph requires you to supply enough “spare” resources to absorb a failure. You need enough free disk space on each host to absorb the loss of the largest OSD on that host. Further, in a cluster with replica 3, you really should have at least 4 hosts in...
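A quick back-of-the-envelope example (numbers made up): 4 hosts with 10 TB of OSDs each at replica 3 gives roughly (4 x 10) / 3 ≈ 13 TB usable, but if you want to lose a whole host and still rebalance, plan around (3 x 10) / 3 = 10 TB - and stay below the default nearfull warning at 85% of capacity on top of that.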