Hi Tmanok,
thank you for your reply!
I do not have any HDDs, only one NVMe per node holding everything.
I will change the PBS datastore from a ZFS disk to an EXT4 disk. However, I do not understand why writing to PBS is 3 times faster than reading from it while restoring, even if I restore to the...
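For anyone following along, this is roughly how I plan to create the EXT4 datastore; the partition, mount point, and datastore name are just placeholders for my setup:
# mkfs.ext4 /dev/nvme0n1p4
# mkdir -p /mnt/datastore/store1
# mount /dev/nvme0n1p4 /mnt/datastore/store1
# proxmox-backup-manager datastore create store1 /mnt/datastore/store1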
Dear Tmanok!
I have 2 nodes in a cluster with the same specs:
CPU: 2x Xeon(R) CPU E5-2673 v4 @ 2.30GHz per node, 20 cores each
8 cores are assigned to the PBS virtual machine on node #2, although 4 cores would be enough.
RAM, PVE: 320 GB per node
RAM, PBS: 8 GB, although 4 GB would be enough
RAM used is 50% on each, although not...
I can confirm that by changing the CPU type of the PBS VM to "host", I could raise the TLS value from 43 MB/s to 275 MB/s:
Before:
Uploaded 52 chunks in 5 seconds.
Time per request: 98437 microseconds.
TLS speed: 42.61 MB/s
SHA256 speed: 231.30 MB/s
Compression speed: 373.42 MB/s
Decompress...
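For anyone wanting to reproduce this: the numbers come from the proxmox-backup-client benchmark, and the CPU type change needs a cold restart of the VM. VMID 100 and the repository string are just placeholders for my PBS VM:
# qm set 100 --cpu host
# qm stop 100 && qm start 100
# proxmox-backup-client benchmark --repository root@pam@pbs.example:store1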
Dear Wolfgang,
is the "storage bwlimit" functional for LVM-Thin volumes? I get the following error:
# pvesm set local-lvm --bwlimit 40960
400 Parameter verification failed.
bwlimit: invalid format - value without key, but schema does not define a default key
pvesm set <storage> [OPTIONS]
I...
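From the error message I would guess the schema expects key=value pairs instead of a bare number; maybe one of these is the intended syntax (just my guess, untested):
# pvesm set local-lvm --bwlimit migration=40960
# pvesm set local-lvm --bwlimit default=40960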
If you find the combination of CLI commands to resolve this without a reboot, I would be very happy to know. I also have one node which I do not want to reboot :) Perhaps it has something to do with the new kernel and new LXC; then a reboot is necessary in any case.
P.S: Yes it is LVM-thin
What would be the command for doing the move, and to which location should I move the disk? For storage I have a local-lvm per node and one globally available NFS share.
qm move_disk 115 sata0 ...
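For completeness, the full form I would try, assuming the NFS storage is called "nfs01" (the storage name is a placeholder):
# qm move_disk 115 sata0 nfs01
Without the --delete option the source volume should stay behind as an unused disk, so:
# qm move_disk 115 sata0 nfs01 --delete 1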
Hello Fabian,
thank you for your prompt answer!
The migration reproducibly cancels when migrating the VMs which have 2 disks attached. The migration works without problems for the first 8 GB disk, followed by the transfer of the second disk, which is canceled at approx. 50%.
Instead...
Any news about live migration of KVM guests with local disks?
I am able to live migrate a test KVM machine successfully. But if I try to live migrate my production KVM machines with multiple local disks, the migration is canceled in the middle of the transfer of the second disk:
drive-sata0...
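For reference, this is how I start the migration; the VMID and target node name are from my setup:
# qm migrate 115 node2 --online --with-local-disks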
The "profile lxc-container-default-with-nfsd" solution also works now with the Debian 9.02 template.
For the record:
Upgrading a Debian 8 LXC container to Debian 9 does not help as far as I tried.
I think that will lead to big problems for everybody who is upgrading from Proxmox 4.4 to 5.0...
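For anyone searching later, this is roughly the setup that worked for me; the CT ID is a placeholder and the profile body follows the commonly posted nfsd variant, so treat it as a sketch rather than the official solution:
# cat /etc/apparmor.d/lxc/lxc-default-with-nfsd
profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}
# apparmor_parser -r /etc/apparmor.d/lxc-containers
# echo "lxc.aa_profile: lxc-container-default-with-nfsd" >> /etc/pve/lxc/103.conf
# pct stop 103 && pct start 103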
I tried again on the 5.0 node:
I created a new CT with a Debian Jessie turnkey template.
I added the "lxc.aa_profile: unconfined".
I started the CT.
I installed "nfs-kernel-server" and created /etc/exports, resulting in the same error:
# service nfs-kernel-server start
[warn] Not starting NFS...
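In case the exports file matters for reproducing this: mine is a single line like the following (path and subnet are from my test setup):
/srv/nfs 192.168.0.0/24(rw,sync,no_subtree_check)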
Thank you for your quick reply!
I added the "lxc.aa_profile: unconfined" line and upgraded the Debian inside the CT from jessie to stretch.
I shut down the CT, started it again, and I still get the error:
Linux nas003 4.10.17-1-pve #1 SMP PVE 4.10.17-18 (Fri, 28 Jul 2017 14:09:00 +0200) x86_64...
I tried this extra aa_profile on a freshly installed 5.0 node without success. "lxc.aa_profile: unconfined" in the CT config does not work either, although it does work on my other 4.4 node.
What is the correct solution for a NFS server inside a LXC container on a 5.0 proxmox node?