This request for help is about keeping two VMs synchronised between a Proxmox server and a laptop VM; the situation is as follows.
Curious to hear what you think.
A VM is used on a laptop during off-net tasks. For some redundancy, it would be of interest to also have...
I've not observed this directly, but it seems the machine rebooted for some reason before the most recent linux-image package had completed installation.
After messing with the VM configuration and mounting the disks, I came to the conclusion that the system was also intact. So I did what I should...
Though I do not know how to fix this, the data is intact.
VM disks on ZFS are found as zvols under /dev/zvol/<poolname>/data/...
These are actually symbolic links to /dev/<devicename>. Look for vm-<vmid>-<partid> and use `ls -l vm-<vmid>-<partid>*` to identify the actual device name under...
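As a sketch of the above (pool name `rpool` and VM id `100` are placeholders, substitute your own):

```shell
# List the zvol symlinks for a given VM; the link target shows the real device
ls -l /dev/zvol/rpool/data/vm-100-disk-0*

# readlink -f resolves the symlink all the way to the underlying /dev node
readlink -f /dev/zvol/rpool/data/vm-100-disk-0
```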
I think I've outdone myself here. For an encrypted VM with uncertain memory requirements, I chose to experiment with changing the CPU configuration, memory ballooning and hotplug, enabling 1GB pages for the CPU, and NUMA.
Despite multiple reboots in multiple configurations, this now fails to recognise...
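For reference, a sketch of such a configuration via `qm` (VM id `100` is a placeholder; verify each option against your Proxmox version before applying):

```shell
# Enable NUMA for the guest
qm set 100 --numa 1

# Back guest memory with 1 GiB hugepages
qm set 100 --hugepages 1024

# Disable ballooning; ballooned memory cannot shrink below
# hugepage-backed allocations, so the combination is fragile
qm set 100 --balloon 0
```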
Thanks. Though I love its feature set, I became hesitant about ZFS as it consumes a lot of memory on the small-scale VM servers I build. I hope the people who distribute the Proxmox ISO take a moment to enable partitioning sooner or later; it is needlessly aggressive to claim entire disks.
Is there a way to not wipe the entire disk during installation?
I seek to preserve a pre-installed Microsoft Windows partition and boot it as a VM.
I've done this before using QEMU and have a specific requirement to do so.
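Once Proxmox is installed alongside the existing partition, passing the physical disk through to a VM can be sketched as below (VM id `100` and the disk id are placeholders; point it at the disk actually holding the Windows installation):

```shell
# Attach the whole physical disk to the VM as a SATA device;
# /dev/disk/by-id/... names are stable across reboots, unlike /dev/sdX
qm set 100 --sata0 /dev/disk/by-id/ata-EXAMPLE-DISK
```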
Having had a quick look at https://www.kernel.org/doc/html/v5.10/admin-guide/kernel-parameters.html, I did not find that kvm=off should have any impact on KVM; it is most likely an NVIDIA-specific boot parameter. If you do not experience performance issues and passthrough works, stick with it.
Just tested with a different Win10 Pro VM, same problem: changing the CPU results in a boot loop, changing it back to kvm64 fixes it. Same for both.
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)
There is one MS Windows VM which behaves quite peculiarly. Changing the CPU away from kvm64 (e.g. to host) results in an inability to boot. Even a reboot into safe mode using msconfig does not work as usual.
Despite having configured a Windows 10 .iso to boot from, the 'press any key to boot...
Running an IDS VM, I realised I only see broadcast traffic and the like. What is the preferable way to create SPAN ports so the IDS VM can monitor all traffic on the virtual networks and physical interfaces?
I assume monitoring all traffic on the physical interfaces is easy, but VM to VM...
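One approach, sketched under assumptions (bridge `vmbr0` and the IDS VM's tap interface `tap100i0` are placeholders for your setup), is to mirror bridge traffic to the IDS tap with `tc`:

```shell
# clsact provides both ingress and egress hooks on the bridge
tc qdisc add dev vmbr0 clsact

# Mirror every packet entering the bridge to the IDS VM's tap interface
tc filter add dev vmbr0 ingress matchall \
    action mirred egress mirror dev tap100i0

# Mirror packets leaving the bridge as well
tc filter add dev vmbr0 egress matchall \
    action mirred egress mirror dev tap100i0
```

Note this only sees traffic crossing vmbr0 itself; VM-to-VM traffic on other bridges needs the same treatment per bridge.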
Ehr. Yes, indeed. There is an nvme device I cannot find anywhere else.
It has partitions assigned to it which I assume were made by ZFS.
It actually shows it was assigned a label and ZFS partitions.
zpool status shows the 3 disks I already have in place, not the newly added disk.
errors: No known data errors
zpool import shows two pools I don't know, but again nothing out of the ordinary.
I can see the nvme disk in the Proxmox web UI, but I cannot find it anywhere else.
When I check 'Disks' under 'Storage View' it shows the 1TB nvme I have installed; next to it, it says usage ZFS.
When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB nvme, and I see no way to add it to this pool.
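A sketch of how this could be investigated (the device name `/dev/nvme0n1` is a placeholder; the commented commands are destructive or irreversible, so double-check the device before uncommenting anything):

```shell
# Show partitions and filesystem signatures on the disk
lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/nvme0n1

# Probe for importable pools whose labels live on this disk
zpool import -d /dev/nvme0n1

# If the labels belong to a defunct pool, they can be cleared
# (destructive: removes the ZFS label from that partition)
# zpool labelclear -f /dev/nvme0n1p1

# Adding the disk to rpool creates a new top-level vdev and is
# irreversible; only do this deliberately
# zpool add rpool /dev/nvme0n1
```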
This is in the back of my mind, actually. Linux is infamous for not providing user-sensible memory reporting. Consuming all available RAM is 'by design' on Unix systems; reporting it, however, is a different matter. On Linux, free, VIRT, MEM%, buffers and cache are essentially in use all the time...
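For instance, the kernel's own estimate of usable memory is MemAvailable, which is usually the number worth watching rather than "free":

```shell
# "free" counts buffers/cache towards used memory; the "available"
# column is derived from MemAvailable in /proc/meminfo
free -h

# MemAvailable estimates how much memory can be used without swapping
grep -E '^(MemTotal|MemFree|MemAvailable)' /proc/meminfo
```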
Interesting. I only see that temporarily when I restart all VMs.
Things I will try in the future:
- stop some non-essential VMs which I suspect may play a role in this behaviour
- disable the ballooning service in the VM if it is running without ballooning enabled
- learn more about...
Yes, this looks very similar to what I'm observing.
The reasoning is that ZFS reserves half of the available RAM; looking at the memory consumption, I have not found that to be the case. Also, in your cases that would put the RAM consumed at between 57-58GB.
What I'm considering is that KSM may actually be...
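Rather than reasoning from the half-of-RAM default, the ARC's actual size can be read directly, and its ceiling lowered; a sketch (the 4 GiB limit is an arbitrary example value):

```shell
# Current ARC size ("size") and ceiling ("c_max"), in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 4 GiB until the next reboot
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" \
    > /etc/modprobe.d/zfs.conf
```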
To my understanding this is all down to configuration issues. You may have set a default which now negatively affects performance.
Also, this is exactly what kvm=off means. Literally every single instruction is emulated now, which is about as fast as it can go without KVM.
The "issue" is back; no machine is running with ballooning enabled. Edit: note the server was not restarted, just all the VMs, so ZFS memory consumption is possibly not part of the growing memory consumption.
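Whether KSM is contributing can be read from sysfs; a sketch (the 4 KiB page size is an assumption for a rough estimate):

```shell
# Pages currently deduplicated by KSM
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing

# Approximate memory saved by KSM, assuming 4 KiB pages
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4096 / 1024 / 1024 )) MiB
```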
Afaik it is not because of the memory reservation reported in MEM, but that...