Hey everyone!
I've put together a community fork featuring several enhancements that may be of interest to those working with custom QEMU configurations. Please note that it has not been fully tested yet. With this fork, you can use custom OVMF...
Hi there,
first of all, this tutorial is mainly aimed at beginners. As usual with a Linux-based OS, you can do nearly anything you want. This is only one solution and it does not claim to be the best. There are thousands of other ways that...
Hello everyone,
I wanted to share a very positive experience: finally getting an HX370 with 64GB of RAM working the way I hoped when I built it 6 months ago. Major kudos to all the devs involved.
The result is a controlled multi-agentic tool (OpenKIWI)...
Is it possible to get the Intel e1000 NIC working in Windows NT 4.0 Server?
I've been messing around for a good number of hours, trying an original Intel CD I found on archive.org and PRONT4.EXE - archived from a dead Intel link.
Unfortunately I...
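For reference, this is roughly how the guest NIC is defined on the PVE side (the VMID and bridge below are just examples):

# set the guest NIC model to the emulated Intel e1000
qm set 105 -net0 e1000,bridge=vmbr0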
@LBP321 you sent me a DM with a question. Please don't; I won't answer questions via DM. To me that defeats the whole point of a forum.
You asked me where to find the volblock size.
The answer to your question is just a google search away.
zfs get...
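For anyone else landing here: the property is volblocksize and it is set per zvol. A minimal example (the dataset name is a placeholder):

# show the volume block size of a single zvol
zfs get volblocksize rpool/data/vm-100-disk-0
# or list it for every zvol on the system
zfs get -t volume volblocksize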
The CRUSH map knows where each OSD is (zone, host, root). The `localize` policy uses that topology to compute distances. But the client (the QEMU/librbd process running on your PVE node) is not in the CRUSH map — it's external to the cluster. The...
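If you want to see exactly what topology the policy has to work with, the tree view shows it (run from any node with a Ceph admin keyring):

# print the CRUSH hierarchy (roots, zones, hosts, OSDs) that the localize policy computes distances on
ceph osd tree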
Thanks.
After using the right search terms, it seems there is (or was?) an easier method than creating a token and issuing the REST command from the node itself (one step less):
pvenode cert set <cert> [<key>] [--force] [--restart]
Found via...
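In case it helps the next person, a hedged example with placeholder paths (run on the node whose certificate you want to replace):

# upload a custom certificate plus key and restart pveproxy so it is picked up
pvenode cert set /root/fullchain.pem /root/privkey.pem --force 1 --restart 1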
Using PMG as a smart host, the issue I'm running into is the attachment scan on port 10023, which causes a reinjection. When I remove the content filter scan, PMG acts as a transparent host and works fine, but then I'm not able to apply any filtering...
I thought it was quirky that Proxmox offers all these LXC containers, but there are no TurnKey templates for its natively supported monitoring systems, Grafana and InfluxDB.
That really seems like a footgun to me. WTH?
I wanted to see this Metric Server...
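For what it's worth, the metric server part doesn't need a container at all; Datacenter -> Metric Server in the GUI writes a stanza to /etc/pve/status.cfg. Roughly what a minimal InfluxDB (UDP) entry looks like, with a placeholder name/address and keys quoted from memory, so configure it via the GUI rather than copying this verbatim:

influxdb: metrics
    server 192.168.1.50
    port 8089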
Replacing the drive you boot from requires some additional, specific steps. For example, the partition table is different from that of a pure ZFS pool member.
It is documented here...
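As a rough outline of what the documentation walks you through for a mirrored ZFS boot pool (device names, partition numbers and the pool name rpool are placeholders; follow the guide for your exact layout):

# copy the partition table from a healthy bootable member to the new disk, then randomize its GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb
# replace the failed member with the new disk's ZFS partition
zpool replace -f rpool <old-zfs-partition> /dev/sdb3
# make the new disk bootable as well
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2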
I might have found where the problem could be:
When the share is unmounted:
root@pbs:/mnt# ls -l
total 0
drwxr-xr-x 3 root root 23 Mar 21 09:11 datastore
drwxrwx--- 2 backup backup 6 Mar 20 12:33 nfs
When the share is mounted:
root@pbs:/mnt# ls...
Glad to hear. For automated snapshots I use this (via crontab): https://github.com/Corsinvest/cv4pve-autosnap
My backups are scheduled via PVE. I recommend PBS as target here.
I do not know what "988:988" stands for in your setup - but it is not the usual "backup" id.
For me it helped to set the "backup" user as the owner: chown -R backup:backup /mnt/nfs . For an NFS mounted folder with the name "dsa" on a PBS I can...
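For reference, a crontab entry for it might look roughly like this (host, token and retention are placeholders, and the flags are from memory of the project's README, so double-check them there):

# hourly snapshots of all guests, keeping the last 24
0 * * * * cv4pve-autosnap --host=pve1.example.com --api-token='root@pam!snap=xxxxxxxx' --vmid=all snap --label=hourly --keep=24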
@SteveITS: for krbd, `cache=writeback`/`cache=none` has completely different semantics. With librbd (the QEMU RBD block driver), those options control whether librbd's own write-back cache (`rbd_cache`) is enabled — that's what my earlier post...
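For completeness, those librbd cache knobs live on the client side of ceph.conf; the values below are the defaults as I remember them (verify for your Ceph release), and QEMU's cache= setting then decides whether `rbd_cache` is actually used for a given disk:

# /etc/ceph/ceph.conf, client section
[client]
rbd_cache = true
rbd_cache_size = 33554432
rbd_cache_writethrough_until_flush = true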
Just chiming in to say I had these issues as well. I had 3 x Intel NUCs in an Akasa case and had no issues with Proxmox 8, but then I updated to 9 and started having these weird 'lock-ups' where the NUCs were still on but became unresponsive...
So I made the cardinal sin of forum posting: forgetting to come back after the burn-in test went well...
I found that I had not actually updated to the latest bios on both, so I gave that a shot.
Once I updated to the latest, everything was...
In general, network storage isn't a good fit for PBS; see https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/ and https://pbs.proxmox.com/docs/system-requirements.html#recommended-server-system-requirements
Is your NAS...
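If you want a quick read on whether the NAS can keep up with a chunk store, a small random-I/O test against the datastore path is more telling than sequential numbers (path and size below are placeholders):

# 4k random writes into the mounted datastore; only touches the test file it creates
fio --name=pbs-test --directory=/mnt/datastore --rw=randwrite --bs=4k --size=1G --numjobs=1 --iodepth=16 --ioengine=libaio --direct=1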
Thank you for pointing that out. I only wanted to back up the boot disk (16GB), not the 1TB mount HDD. All I had to do was click on the LXC -> Resources -> Mount Point -> Edit button on the ribbon -> uncheck Backup.
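The same can be done from the CLI if you prefer; the container ID, storage and path below are examples, and you have to re-specify the existing volume and mount path when changing a mount point option:

# exclude mount point mp0 of container 101 from backups
pct set 101 -mp0 local-zfs:subvol-101-disk-1,mp=/data,backup=0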
Probably, the cleanest solution for per-node config in a PVE cluster is `crush_location_hook`: a script that outputs the correct location based on the hostname. You set the hook path once in the shared ceph.conf, then deploy the script to each...
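A minimal sketch of such a hook (hostnames and bucket names are made up; the output is the usual space-separated key=value CRUSH location):

#!/bin/sh
# /usr/local/bin/crush-location-hook.sh
# referenced from the shared ceph.conf as: crush_location_hook = /usr/local/bin/crush-location-hook.sh
# prints this node's CRUSH location based on its short hostname
case "$(hostname -s)" in
    pve1) echo "root=default zone=a host=pve1" ;;
    pve2) echo "root=default zone=b host=pve2" ;;
    *)    echo "root=default host=$(hostname -s)" ;;
esac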