Hi,
(Basically, you have to get RDMA working first. Look here: https://enterprise-support.nvidia.com/s/article/howto-configure-nfs-over-rdma--roce-x)
It took me a while to find a way for Proxmox to use NFSoRDMA. First I tried to change the storage options in /etc/pve/storage.cfg from
nfs...
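For reference, the kind of entry I was experimenting with looks roughly like this (storage ID, server, export and path are made up, and proto=rdma/port=20049 assume the server already exports NFS over RDMA):
# /etc/pve/storage.cfg - hypothetical NFSoRDMA entry
nfs: nfs-rdma
        server 192.168.100.10
        export /tank/pve
        path /mnt/pve/nfs-rdma
        content images,backup
        options vers=4.2,proto=rdma,port=20049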
While running low-latency tasks in VMs, you may want to pin your vCPUs to physical cores so you don't lose caches and so tasks don't get migrated to cores that are still running at a lower frequency.
The existing pve_helper scripts managed to pin the qemu CPU threads to a "CPU pool"...
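For comparison, the bare-bones version of that pinning could be sketched like this (the VMID, core list and loop are mine, not taken from the pve_helper scripts):
# pin every thread of VM 100 to cores 2-5 (example values)
PID=$(cat /var/run/qemu-server/100.pid)
for TID in /proc/$PID/task/*; do
    taskset -cp 2-5 "$(basename "$TID")"
done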
Hi,
I added a cache drive to my pool, which has the secondarycache=none attribute set (see the command sketch after the zpool status output below).
pool: spinning
state: ONLINE
scan: scrub repaired 0B in 04:40:29 with 0 errors on Sun Oct 9 05:04:31 2022
config:
NAME STATE READ WRITE CKSUM
spinning...
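For anyone wondering, the setup is roughly this (the device name is a placeholder, not my actual disk):
# add an L2ARC cache device to the pool
zpool add spinning cache /dev/disk/by-id/nvme-EXAMPLE
# tell the pool's root dataset not to feed the L2ARC
zfs set secondarycache=none spinning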
Hi,
I'm running a Debian 10 LXC container and want to set the IPv6 token. But when running
ip token set ::xxx dev eth0
inside the container it returns
Error: ipv6: Router advertisement is disabled on device.
But it actually does get an IP via SLAAC.
inet6 xxx/64 scope global dynamic mngtmpaddr...
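The obvious thing to try would be turning router advertisements back on via sysctl before setting the token (a sketch; accept_ra=2 means accept RAs even with forwarding enabled):
# inside the container
sysctl -w net.ipv6.conf.eth0.accept_ra=2
ip token set ::xxx dev eth0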
How do you build the pve kernel in parallel?
The -j flag on make is completely ignored, and building the kernel on a low-clocked 32-core EPYC takes forever (well, it could use 64 threads instead, which should speed up build times enormously).
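I would have expected to be able to push the parallelism through the Debian packaging instead (a sketch; I'm assuming the top-level make just wraps dpkg-buildpackage, so the exact target may differ):
export DEB_BUILD_OPTIONS="parallel=64"
make
# or, when invoking the package build directly:
dpkg-buildpackage -j64 -b -uc -us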
There seems to be an "older" bug in pve - when I enable kvm_amd avic, Proxmox adds the x2apic flag to an EPYC CPU (which is an Intel CPU flag, so no wonder it cannot work).
More information:
https://bugzilla.redhat.com/show_bug.cgi?id=1675030
I could fix this on my own, so please link me to...
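For context, by "enable kvm_amd avic" I mean the usual module option (sketch):
# /etc/modprobe.d/kvm_amd.conf
options kvm_amd avic=1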
According to this:
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
due to the lack of ethtool, how would I enable Multiqueue on virtio NICs on Windows and BSD (OPNsense)?
You mentioned here that PINCTRL_AMD will be disabled, but it's not, neither in 4.10.1-2-pve nor in 4.10.5-1-pve, so I always have to compile the kernel myself, which is a hassle. Could you please change this parameter in your kernel config?
pve-zsync gives me the following error:
pve-zsync sync --source 101 --dest 192.168.1.11:poolWDz1/vm-101-disk-1 --verbose --maxsnap 2 --name test101 --limit 512000
Vm include no disk on zfs.
zfs list | grep vm-101
rpool/data/vm-101-disk-1 1.98G 632G 1.98G -
cat /etc/pve/qemu-server/101.conf...
Was this patch already backported to the pve kernel? If not, will it be in the future?
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=08b259631b5a1d912af4832847b5642f377d9101
Regards
Hi,
I want to try installing Proxmox on an SD card. Because of the shorter lifetime of SD cards compared to HDDs or SSDs, I want to make as few writes to it as possible, so I'm thinking of putting the mount points for /var and /tmp on other disks. Any other directories which should be mounted somewhere...
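Roughly what I have in mind for fstab (a sketch; device names are placeholders, and the tmpfs line is an alternative for /tmp):
# /etc/fstab - move write-heavy paths off the SD card
/dev/sdb1  /var  ext4   defaults,noatime  0  2
tmpfs      /tmp  tmpfs  defaults,size=1G  0  0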
Hi,
how can I set a different sector size for ZFS during the Proxmox installation? As I read somewhere, it's always set to ashift=12, i.e. a 4k block size. I want to set it to 8k (ashift=13) and on another node to 32k (ashift=15?).
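Done by hand after a plain install it would be something like this (pool names and devices are examples), but I'd prefer the installer to do it:
zpool create -o ashift=13 tank  mirror /dev/sda /dev/sdb    # 8k sectors
zpool create -o ashift=15 tank2 /dev/nvme0n1                # 32k sectors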
Hi,
I noticed that while there is heavy load from file transfers on ZFS (like restoring a VM), the KSM process slows down the whole system heavily.
I reserved 35GB of RAM for ZFS, and sure, it's KSM's job to scan memory pages, but it should not scan the ZFS memory-cache pages.
Any...
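As a stopgap one could at least slow KSM down via the generic knobs (example values, not a real fix):
# slow KSM down, or stop it entirely while the restore runs
echo 2000 > /sys/kernel/mm/ksm/sleep_millisecs
echo 0 > /sys/kernel/mm/ksm/run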
Hi,
I'm wondering when pve-qemu-kvm 2.7 will be released. Have any changes been made to the QEMU source at all? If not, could I just build http://wiki.qemu-project.org/download/qemu-2.7.0.tar.bz2 and use it?
I'm asking because 2.7 includes some changes which let me pass through...
Workaround:
Install Debian Jessie
Add Proxmox Repositories as per the wiki: https://pve.proxmox.com/wiki/Install..._Debian_Jessie
Set Debian Kernel 3.16 as default (or select it during boot in grub menu)
Enable PCIe Passthrough as per the wiki: https://pve.proxmox.com/wiki/Pci_passthrough (IOMMU kernel parameter sketched after this list)
Use OVMF instead...
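The IOMMU part of the passthrough step boils down to something like this (Intel example; on AMD it would be amd_iommu=on):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then apply and reboot
update-grub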