After upgrading from Proxmox VE 7.1 (kernel 5.13.19-6-pve) to 7.2 (kernel 5.15.60-1-pve), our NFS shares using version 4.2 aren't working anymore. Choosing version 4 in the GUI makes the NFS shares work again, but it mounts them over NFSv3!
# pveversion --verbose
proxmox-ve: 7.2-1 (running...
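For reference, the NFS version that was actually negotiated can be checked on the node; a quick way is:

```shell
# Show all NFS mounts with their negotiated options (look for vers=)
nfsstat -m

# Alternatively, filter the mount table for NFS entries;
# the flags will show vers=4.2, vers=4.0 or vers=3
mount -t nfs,nfs4
```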
It's not for more I/O. I already wrote this:
And this:
I'm looking for the solution that comes closest to 100% uptime.
Thanks for your suggestions anyway.
1 and 2) I don't think this is a viable option. It's one of our customers on our platform; we can't modify our whole PVE cluster setup for just this customer. Our new setup consists of 5 PVE nodes with NVMe disks for Ceph VM storage. That means all our VMs are assigned a virtual disk on our...
It's for the web servers. For VM storage we use Ceph RBD.
My experiences with OCFS2 aren't that great. I can't recall the exact problems I had, but as far as I remember they had something to do with locking. Besides that, our Ceph cluster is configured with 3 replicas and it feels very inefficient to use...
ATM we're running a web server cluster for a customer with 3 web VMs and 1 NFS file server VM for the doc root and dynamic user files, uploaded through our client's app. This NFS server is the SPOF in the setup and we want to make the file storage HA. Of course the current NFS VM is on pve...
We have a misunderstanding, forget this. The only thing I should have asked is whether you can make the field Configuration->Mail Proxy->Options: DNSBL Sites less strict in its checking; I would like to add the following entry: zen.spamhaus.org*2 bl.spamcop.net b.barracudacentral.org psbl.surriel.com...
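For context, the `*2` suffix is postscreen's weight syntax: each DNSBL site can carry a score, and postscreen only rejects once the combined score reaches a threshold. A minimal sketch in plain Postfix terms, using only the sites listed above (the threshold value is just an example, not PMG's default):

```
# main.cf sketch: weighted DNSBL scoring for postscreen
postscreen_dnsbl_sites =
    zen.spamhaus.org*2
    bl.spamcop.net
    b.barracudacentral.org
    psbl.surriel.com
postscreen_dnsbl_threshold = 3
```

With this, a hit on zen.spamhaus.org alone (score 2) is not enough to reject; it needs at least one additional list to reach the threshold.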
Seems fair. I corrected our settings and everything is working fine now.
Yes, I asked about DNS based whitelists. But you said:
DNS-based blocklists are also checked by SA, but it helps to use them earlier in the process. SA uses the same logic as I mentioned: let RBLs add points and WL...
Thank you for your answers!
Thanks! Now it works, but I must say it isn't really clear which values to use here. I read the deployment guide and understand the need for this setting, but you talk about incoming and outgoing, while in PMG it's called internal and external SMTP port. Besides that, I...
Proxmox Mail Gateway 5.0 looks great! Good work.
During my tests I found a few things I would like to share/ask.
I'm using PMG for incoming mail filtering only. I've set up a domain's MX record to point to the PMG, added the domain to "Relay Domains", and set up a Transport to the destination...
Good news: @wolfgang 's patch has been reviewed and signed off. See http://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg03318.html and http://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg03339.html
I suppose this means that we can expect it to be included into the next QEMU...
You can easily go back to the old values by:
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
Or simply reboot your host.
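To verify which transparent hugepage policy is currently active, you can read the same sysfs files; the active setting is the one shown in square brackets:

```shell
# Prints something like: always [madvise] never
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
```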
My problem is solved with a patch from @wbumiller. I asked for an updated pve-qemu-kvm package for you...
YES, it works!!! Thank you! I repeated the QEMU build twice and reran my tests to be sure. I am very grateful, thanks for your help and the fix!
I'm curious: now that you know the problem, the cause and the solution, can you think of a way to trigger the problem on your hardware? It still seems I'm...
The bisect finished and the only remaining commit was the one that introduces the error. I'm not familiar with git; I did this to apply the commit:
root@test:/usr/src/qemu# git cherry-pick 3b3b0628217
[detached HEAD beb0fb61a2] virtio: slim down allocation of VirtQueueElements
Author: Paolo...
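For anyone repeating this, a bisect run to locate such a commit looks roughly like the following sketch; the good/bad revisions here are placeholders for whichever versions you actually tested:

```shell
# Mark a known-bad and a known-good revision, then let git narrow it down
git bisect start
git bisect bad HEAD        # e.g. a 2.6.0-based build that crashes
git bisect good v2.5.0     # placeholder: last version that worked

# Rebuild and test at each step git checks out, then mark the result:
git bisect good            # or: git bisect bad

# When git reports the first bad commit, return to the original branch:
git bisect reset
```

A commit found this way can then be applied on top of another tree with `git cherry-pick <commit>`, as shown above.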
Thank you for the suggestion. I tested it with a fresh git clone and removed --enable-jemalloc from the configure command. My test crashes my VM on the first run. I'm not having performance issues, BTW; with the test in my first post my VM crashes, and in Debian it's after the first run in most cases...
I did some additional tests with a default PVE 5.0 install.
The VM config has NUMA enabled and I use vCPUs (1 socket, 8 cores, 2 vCPUs). Under Options we add Memory and CPU to Hotplug.
I tested this VM config with a default Debian Stretch install (memory hotplug in guest enabled in /etc/default/grub...
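The same VM config can be sketched on the CLI with `qm` (VMID 100 is a placeholder; the GUI steps above are equivalent):

```shell
# Enable NUMA, set the CPU topology, and allow CPU/memory hotplug
qm set 100 --numa 1
qm set 100 --sockets 1 --cores 8 --vcpus 2
qm set 100 --hotplug cpu,memory
```

Inside the Debian guest, hotplugged memory can be auto-onlined via a kernel parameter in /etc/default/grub, e.g. adding `memhp_default_state=online` to GRUB_CMDLINE_LINUX_DEFAULT, followed by `update-grub` and a reboot.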
I didn't want to hijack this topic for my own issue :-) Please see this post for the answer on your question and additional details: https://forum.proxmox.com/threads/vm-crash-with-memory-hotplug.35904/#post-181622
I will post some additional test results in a few minutes there. Thanks!
Thanks. Hardware is unrelated. I'm using Dell PowerEdge R310, R320, R420 and R610. In the R610 I use an Intel Xeon X5650.
My problem also doesn't occur when I change the SCSI Controller type to the default (LSI 53C895A) and use SCSI as the hard disk bus. VMware PVSCSI also works. My git bisect revealed a...
@Andreas Piening and @micro
What hardware are you using? Brand of server and CPU model?
I'm experiencing a strange issue with QEMU since 2.6: all versions before behave correctly, but from 2.6.0 on I have problems. Not the same as yours, but it might be related somehow because IDE solves it for me...