Wrong guest memory reported by the balloon driver with fresh PVE 8.x and a fresh Windows Server 2019 guest
- Fresh PVE 8
- Fresh Windows Server 2019 as guest
- No software installed except the latest VirtIO drivers (0.1.229), the balloon service, and the QEMU Guest Agent
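In case it helps with reproducing the comparison, this is roughly how I look at what the balloon device itself reports versus what the guest shows (VM ID 100 is just an example):

# on the PVE host: query the virtio-balloon device directly
qm monitor 100
# at the qm> prompt:
info balloon
# inside the Windows guest, compare with Task Manager and make sure the
# Balloon Service installed with the VirtIO drivers is actually running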
I set up one of my nodes as an NFS server (10 GbE, NFS export from ZFS storage on SSD disks).
I connected 12 hosts to that node (NFS storage) with the default NFS settings (v4.2):
nfs: DS-254-NFS-B
        export /tank254/datastore/slave/nfs64k/id254
        path /mnt/pve/DS-254-NFS-B
        server...
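For reference, the same storage can also be added from the CLI; a rough sketch, with the server address and content type as placeholders to adjust:

pvesm add nfs DS-254-NFS-B \
    --path /mnt/pve/DS-254-NFS-B \
    --server <nfs-server-ip> \
    --export /tank254/datastore/slave/nfs64k/id254 \
    --content images
# without an explicit --options line the mount negotiates the highest
# NFS version the server offers (4.2 here); it can be pinned with
# --options vers=4.2 if needed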
You can follow this tutorial (linked below) with some changes (journaling -> snapshot-based mirroring) and some help from posts on this forum (try the search).
We've managed to set it up. It doesn't run entirely smoothly, but it works.
https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
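Roughly, the part that differs from the wiki article is enabling snapshot-based mirroring per image instead of journaling (pool and image names below are just placeholders):

# on the pool: per-image mirroring mode
rbd mirror pool enable <pool> image
# per image: snapshot-based mirroring instead of journaling
rbd mirror image enable <pool>/vm-100-disk-0 snapshot
# schedule periodic mirror snapshots, e.g. every 15 minutes
rbd mirror snapshot schedule add --pool <pool> 15m
# the rbd-mirror daemon still has to run on the backup cluster,
# and the peers are bootstrapped as described in the wiki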
Thanks for the reply. Yep, I'm fully aware of the issue with IP settings changing (mainly in Windows); I've faced it several times.
The question is: is there any significant reason to do so and keep the VM "up to date"?
For example: maybe there is a relation between the latest VirtIO drivers (for all...
Just wondering: is there any reason to raise the VM machine version to the highest available one after a major PVE upgrade?
In terms of VM stability/performance.
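For reference, this is roughly how I check and, if needed, pin the machine version on a test VM (VM ID 100 and the version string are just examples):

# show the configured machine type (no output = PVE uses the newest one)
qm config 100 | grep machine
# pin the VM to a specific machine version instead of following the newest
qm set 100 --machine pc-i440fx-7.2
# pinning avoids the Windows NIC re-detection / lost IP issue mentioned earlier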
Thanks in advance,
With regard to the blog post at ceph.io:
https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/
Which memory allocator and which librbd version does Proxmox use?
Are the optimizations suggested in the article above applicable to a PVE + Ceph setup?
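In case it helps, a rough way to check on a PVE node which allocator librbd actually pulls in (the library path below is the usual Debian one and may differ on your system):

# librbd package version shipped with PVE
dpkg -l | grep librbd
# is librbd linked against tcmalloc or jemalloc?
ldd /usr/lib/x86_64-linux-gnu/librbd.so.1 | grep -Ei 'tcmalloc|jemalloc'
# which allocator is mapped into a running VM process (VM ID 100 as example)
grep -Ei 'tcmalloc|jemalloc' /proc/$(cat /var/run/qemu-server/100.pid)/maps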
Totally agree. If the PVE maintainers managed to reproduce the issue, they should at least gather some more details on it and share them with the PVE community. It's not a rare issue: lots of PVE users are hitting it, and it has already become a nightmare for dozens of them.
In one of our installations we use a small enterprise-class SSD (or NVMe) in each server to store the VMs' swap. Each VM (Windows) uses the common shared storage (NFS) for its system and data disks, and the local storage (ext4) for an additional swap disk.
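A rough sketch of how the extra disk is attached (storage name, VM ID and size are just examples; "local-ssd" would be the ext4 directory storage on the local SSD):

qm set 100 --scsi1 local-ssd:32,discard=on,ssd=1
# inside the Windows guest the pagefile is then moved to that disk
# (System Properties -> Advanced -> Performance -> Virtual memory)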