Hi,
(Basically you have to get RDMA working first. Look here: https://enterprise-support.nvidia.com/s/article/howto-configure-nfs-over-rdma--roce-x)
It took me a while to find a way for Proxmox to use NFSoRDMA. First I tried to change the storage options in /etc/pve/storage.cfg from
nfs...
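To give an idea of what such an entry could look like, here is a sketch only - the storage ID, server, export path and the rdma/port=20049 mount options are placeholders based on the NVIDIA article above, not a verified working config:

# /etc/pve/storage.cfg - hypothetical NFSoRDMA entry, all values are placeholders
nfs: rdma-store
        server 192.168.100.10
        export /tank/pve
        path /mnt/pve/rdma-store
        content images,backup
        options vers=4.2,rdma,port=20049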
How would you get root access? You would still need the root credentials to log in to the console?
Or do you mean shutting down the server and getting root access via chroot?
While running low-latency tasks in VMs, you may want to pin your vCPUs to physical cores, so you don't lose caches and tasks don't get moved to cores that are still running at a lower frequency when they first land there.
The existing pve_helper scripts managed to pin the qemu CPU threads to a "CPU pool"...
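To give an idea of what such pinning boils down to, here is a rough sketch using plain taskset - the VMID, the core list and matching the "CPU x/KVM" thread names are my own assumptions, not necessarily what the pve_helper scripts actually do:

#!/bin/bash
# Sketch: pin the vCPU threads of VM 100 to cores 2-5 (placeholders).
VMID=100
CORES=(2 3 4 5)
PID=$(cat /var/run/qemu-server/${VMID}.pid)
i=0
for TID in /proc/$PID/task/*; do
    # QEMU names its vCPU threads "CPU 0/KVM", "CPU 1/KVM", ...
    if grep -q '^CPU ' "$TID/comm"; then
        taskset -cp "${CORES[$i]}" "$(basename "$TID")"
        i=$((i+1))
    fi
done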
Hi,
I added a cache drive to my pool which has secondarycache=none attribute.
  pool: spinning
 state: ONLINE
  scan: scrub repaired 0B in 04:40:29 with 0 errors on Sun Oct 9 05:04:31 2022
config:

        NAME        STATE     READ WRITE CKSUM
        spinning...
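For context on what the property means, checking and changing it is the usual zfs get/set - whether "all" or "metadata" makes sense depends on the workload, this is only an example:

# show which datasets have L2ARC caching disabled
zfs get -r secondarycache spinning
# example: allow the cache vdev to hold both data and metadata
zfs set secondarycache=all spinning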
I'm currently testing this. I got pNFS working (for a while) by doing it this way:
modprobe nfs_layout_nfsv41_files
modprobe nfs_layout_flexfiles
mount x.x.1.x:/some/path /some/other/path -o vers=4.1,minorversion=1,max_connect=16
mount x.x.2.x:/some/path /some/other/path -o...
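One rough way to check whether the layout drivers are actually being used (and not just loaded) - treating non-zero LAYOUTGET/GETDEVICEINFO counters as a sign of pNFS activity is my own heuristic:

# are the layout modules loaded?
lsmod | grep nfs_layout
# per-mount NFSv4.1 operation counters
grep -E 'LAYOUTGET|GETDEVICEINFO' /proc/self/mountstats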
The solution is to set
echo 1 > /proc/sys/net/ipv6/conf/eth0/accept_ra (or 2)
inside the container
0 Do not accept Router Advertisements.
1 Accept Router Advertisements if forwarding is disabled.
2 Overrule forwarding behaviour. Accept Router Advertisements even if forwarding is...
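To make this persistent across container restarts, a sysctl drop-in should do - the file name is arbitrary, and I'm assuming the container applies /etc/sysctl.d at boot:

# inside the container
echo 'net.ipv6.conf.eth0.accept_ra = 2' > /etc/sysctl.d/70-ipv6-ra.conf
sysctl --system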
Hi,
I'm running a Debian 10 LXC container and want to set the IPv6 token. But when running
ip token set ::xxx dev eth0
inside the container it returns
Error: ipv6: Router advertisement is disabled on device.
But it actually does get an IP via SLAAC.
inet6 xxx/64 scope global dynamic mngtmpaddr...
Then, after about 3 hours of building, it aborts with the following error(s):
# Autogenerate blacklist for watchdog devices (see README)
install -m 0755 -d debian/pve-kernel-5.4.41-1-pve/lib/modprobe.d
ls debian/pve-kernel-5.4.41-1-pve/lib/modules/5.4.41-1-pve/kernel/drivers/watchdog/ >...
How do you build the pve kernel in parallel?
The -j flag on make is completely ignored, and building the kernel on a low-clocked 32-core EPYC takes forever (but well, it could use 64 threads instead, which should speed up build times enormously).
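What I would try is the standard Debian knob for parallel builds - whether the pve-kernel debian/rules actually passes it down to the kernel make is an assumption on my part:

# standard Debian mechanism, not verified against the pve-kernel build scripts
DEB_BUILD_OPTIONS="parallel=$(nproc)" make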
OK, I found it
add:
push @$cpuFlags, '-x2apic' if $cpu eq 'EPYC';
in sub get_cpu_options in /usr/share/perl5/PVE/QemuServer/CPUConfig.pm
Should be disabled by detecting an AMD CPU.
There seems to be an "older" bug in PVE - when I enable kvm_amd avic, Proxmox adds the x2apic flag to an EPYC CPU (which is an Intel CPU flag, so no wonder it cannot work).
More information:
https://bugzilla.redhat.com/show_bug.cgi?id=1675030
I could fix this on my own, so please link me to...
According to this:
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
due to the lack of ethtool, how would I enable Multiqueue on virtio NICs on Windows and BSD (OPNsense)?
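On the host side the syntax is clear enough (VMID 100 and 4 queues are just example values); my question is really about the guest side:

# request 4 virtio queues for net0 of VM 100 on the host
qm set 100 -net0 virtio,bridge=vmbr0,queues=4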
I came to this thread because I saw this on reddit:
https://www.reddit.com/r/VFIO/comments/fovu39/iommu_avic_in_linux_kernel_56_boosts_pci_device/
Because I hardly have any time and want to update the kernel of my Proxmox 6.1.8 to at least kernel 5.6-rc6, maybe someone can post his...
You are working on that? But well, I thought QEMU/KVM isn't able to do this in the first place and that VMware has some magic tricks to pass through an entire disk without passing through the storage controller.
I think passing through an entire disk as a virtio block device is not what the thread opener wants?
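Just to make clear what I mean by that: attaching a whole physical disk as a virtio block device looks roughly like the following (VMID, bus number and the by-id path are placeholders), and that is something different from passing through the storage controller itself.

# attach an entire physical disk to VM 100 as a virtio block device (placeholders)
qm set 100 -virtio1 /dev/disk/by-id/ata-EXAMPLE_SERIAL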