So fixed memory size means the memory is never available for the host or other VMs, even if ballooning is on and the VM doesn't use the memory itself? What are the reasons for using fixed size memory (as the default)?
I checked the tuned profiles that Red Hat distributes in their systems. The profiles "virtual-guest" and "virtual-host" both use vm.swappiness=10. According to this the PVE installer should use a lower default value than 60 (which is the Debian default).
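For reference, a minimal sketch of how the value can be changed on a PVE host (the value 10 matches the tuned profiles mentioned above; the drop-in file name is just an example):

```shell
# Apply immediately (does not survive a reboot)
sysctl vm.swappiness=10

# Persist across reboots via a sysctl drop-in file
echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf
sysctl --system   # reload all sysctl configuration files
```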
If you can afford the downtime it should be an easy solution to just use the "stop" backup mode. That should stop the DBMS, then the OS and thus give you a consistent state to back up. You may want to tune some things (e.g. GRUB timeout) to make the VM boot faster. @tom (proxmox staff member)...
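As a sketch of what that could look like (the VMID, storage name, and timeout value are examples, not taken from this thread):

```shell
# Back up VM 100 in "stop" mode: the guest is shut down cleanly,
# backed up in a consistent state, then started again afterwards
vzdump 100 --mode stop --storage local

# Inside the guest: shorten the GRUB menu delay so reboots are faster,
# then regenerate the GRUB configuration
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=1/' /etc/default/grub
update-grub
```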
Are there downsides to enabling NUMA? I'm just wondering why it's not set by default.
As this option is exposed in the GUI I think we should have a Wiki article for it. https://pve.proxmox.com/wiki/NUMA
I can verify the problem and the solution. My CentOS 7 VM would not recognize the LVM configuration if I used SCSI or VirtIO in PVE -- even though it is a SCSI disk in VMware.
I think you misunderstood me @robhost. I had the same problem as @bizzarrone as the restore process gets stuck if I try to restore a qcow2 kvm backup on a lvm-thin datastore. No share access involved - just local to local.
Other users seem to have the same problem...
I had that problem too, but it took me longer to realize that it's lvm-thin's fault. :)
https://forum.proxmox.com/threads/vzdump-stuck-cant-kill-vzdump-task.25711/#post-144777
I had the same problem as @bizzarrone, and it took me a lot of time and headache.
Restoring a qcow2 image on lvm-thin hangs and takes forever. That's bad because the default storage on PVE 4.2 is local-lvm (which is thin) and for new VMs the GUI suggests qcow2.
I originally had the LTS dates there, but after reading what Wolfgang said I found that LTS is not really oldstable. I hope someone from the core team will comment on this. But let's give them a few days to discuss internally. :)
enjoy the weekend!
Wheezy is not oldstable anymore - it's LTS by now. Does that mean PVE 3.x is EOL?
I added the info to the FAQ page on the wiki. Please be sure to correct it if I got something wrong or you changed your mind.
https://pve.proxmox.com/wiki/FAQ#How_long_will_my_Proxmox_version_be_supported.3F
Hi!
This sounds similar to the problem I've had.
NFS (at least before v4) does not support permissions based on users. Also your error does not sound like a permission problem. Please check:
a) NFS is enabled on your NAS (Control Panel -> Network Services -> Win/Mac/NFS -> Enable NFS)
b) Allow...
That's the solution - thank you!
There is no authentication in NFSv3 and earlier, and QNAP does not support NFSv4 yet. QNAP also hid the NFS host/IP restrictions well, but I finally found them.
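To double-check from a PVE node which exports the NAS actually advertises and which client hosts are allowed, something like this can help (the NAS IP, export path, and mount point are examples):

```shell
# List the exports the NFS server advertises, including allowed clients
showmount -e 192.168.1.50

# Quick manual mount test from the PVE node
mkdir -p /mnt/nfstest
mount -t nfs 192.168.1.50:/Backup /mnt/nfstest
```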
Hi sdinet - and thanks for your answer.
I'm not sure if I understand your suggestion. I only have the root user of the two nodes configured at the moment. You suggest I create a user named "root" with the same password (which of the two?) on my NAS and give it write permissions on the NFS share?
Hi!
I added an NFS Share to my PVE Cluster. As the NFS-Server (QNAP NAS) is reachable from different parts of the network I created a user on the NAS for PVE to get write access to the Shared Folder.
Adding the Share works with the Web-GUI but I see no option to set connection credentials. It...