This works for me: http://enricorossi.org/blog/2016/intel_sr-iov_on_Debian_Stretch/
My udev rule is saved at /etc/udev/rules.d/99-sriov.rules and looks like...
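For anyone following along, a minimal sketch of that kind of rule (the PCI address and VF count here are illustrative; match them to your own NIC):

    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:02:00.0", ATTR{device/sriov_numvfs}="7"

This just writes the desired VF count to sriov_numvfs when the PF appears, so the VFs come back after a reboot without an rc.local hack.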
Using the provided profile (after matching the profile name to the file name mentioned), reloading AppArmor, and applying it to the container still gives me significant problems that I don't encounter on an LXD system with 'security.nesting true'. I also attempted the 'default with nesting' profile...
Upstream LXC/LXD has had a 'security.nesting' option for over a year that reliably lets LXC run other container runtimes underneath itself without resorting to an unconfined AppArmor profile.
Is there an equivalent lxc.conf option in Proxmox?
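For comparison, enabling it on stock LXD is a one-liner (container name illustrative):

    lxc config set mycontainer security.nesting true
    lxc restart mycontainer

I'd happily set the equivalent in /etc/pve/lxc/<vmid>.conf if such a knob exists.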
Considering the current advice is to run Docker/application containers inside KVM, which defeats the performance reasons for running application containers in the first place, I don't see where the Proxmox infrastructure fits in for future deployments.
I'm hopeful that Proxmox will reconsider...
With the shift to application container runtimes (rkt, runc, containerd, etc.) and their standardization under OCI, I'm wondering if Proxmox has a path to any registry/image-based application deployment.
LXC is great for an entire machine, but I question the decision to run LXC containers for...
I noticed a lot of commits for Intel NIC module changes on the new kernel build.
The network is a 3x 1Gb bonded direct attach using round-robin: literally three 1Gb ports connected directly to three 1Gb ports between two Proxmox hosts.
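For reference, the bond is defined along these lines in /etc/network/interfaces (address and interface names illustrative):

    auto bond0
    iface bond0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bond-slaves eth1 eth2 eth3
            bond-mode balance-rr
            bond-miimon 100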
On the newer kernel I'm unable to even run df -h because of the...
I have a three-node cluster; the 4.15.17-3-pve kernel is freezing NFS client access on one host (Supermicro X8DT3).
The only difference on this server is a balance-rr bond that connects directly to the NFS server (fully updated Proxmox host).
Rolling back to the previous 4.15.17-2-pve kernel...
Thanks for looking into it. My suspicion is that the .raw file still *thinks* it's mounted because of a delayed write on shutdown with async NFS.
I'd assume any shutdown task would issue an fsync or fdatasync against the .raw file and its parent directory, but apparently not.
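What I'd expect the shutdown path to do is roughly the following (paths illustrative; sync -d is a fdatasync, sync on a plain argument is an fsync):

    sync -d /mnt/pve/nfs/images/103/vm-103-disk-1.raw
    sync /mnt/pve/nfs/images/103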
So far I haven't...
Interestingly enough, I can manually add a sync after shutdown on the source node and the delay is gone.
pct shutdown 103 && pct migrate 103 lucius && sync && ssh lucius 'pct start 103'
That results in a < 3 second migration time.
I'm using async NFS w/ ZFS on the backend and a UPS (async can...
NFS server (Proxmox w/ the same software as source and destination), /etc/exports:
/rpool/data 172.16.8.54(rw,async,no_root_squash,no_subtree_check)
Container OS: default CentOS 7 (also reproduced w/ Ubuntu 16.04), all updates applied, no additional repos/software added beyond the base
arch: amd64
cores: 4
hostname...
Looks like it's specifically an NFS problem: migration time on shared iSCSI as well as Ceph is < 4 seconds and doesn't produce the multi-mount protection warnings. Any thoughts on how I can troubleshoot further, or a way to work around the MMP interval?
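One workaround I'm considering (untested, so treat it as a sketch) is checking and possibly dropping the MMP feature on the container image with tune2fs; device path illustrative:

    tune2fs -l /dev/loop0 | grep -i mmp
    tune2fs -O ^mmp /dev/loop0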
With NFS shared storage, LXC containers of varying operating systems all hang for roughly 40 seconds on the PVE 5.1 target node, with the following message, when performing a restart migration.
kernel: EXT4-fs warning (device loop0): ext4_multi_mount_protect:325: MMP interval 42 higher than expected, please...
I have a number of OSDs in a 3-node Ceph cluster and would like to assign only specific OSDs to a new pool. I can find documentation on using the device class of the drive in Luminous, but not on manually selecting arbitrary OSDs.
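For reference, the class-based approach I did find looks like this (rule and pool names illustrative):

    ceph osd crush rule create-replicated ssd_rule default host ssd
    ceph osd pool create newpool 128 128 replicated ssd_rule

What I'm missing is the equivalent for hand-picked OSDs, short of exporting the CRUSH map with ceph osd getcrushmap, editing it with crushtool, and injecting it back with ceph osd setcrushmap.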
Thanks
I'm seeing this same issue (LVM2 monitoring and device mapper) preventing a reboot even with 0 VMs/Containers running.
The iSCSI target is a FreeNAS box and I'm using multipath round-robin.
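For completeness, the multipath.conf is essentially stock round-robin, roughly the following (trimmed; check the detected paths with multipath -ll):

    defaults {
            path_grouping_policy multibus
            path_selector "round-robin 0"
    }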
I spent a fair bit of time on this, but it turns out I forgot about the periodic snapshots I had running on the FreeNAS side. Containers and VMs both relinquish their space on thinly provisioned zvols as expected, by running fstrim (from the guest) or Drive Optimization in Windows.
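Concretely, from inside a Linux guest the reclaim is just:

    fstrim -av

and on the FreeNAS side the freed blocks show up once the stale snapshots referencing them are gone.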