Though I do not know how to fix this, the data is intact.
ZFS disks for VMs are found under /dev/zvol/<poolname>/data/...
These are actually symbolic links to /dev/<devicename>. Look for the vm-<vmid>-<partid> entry and use `ls -l vm-<vmid>-<partid>*` to identify the actual device name under...
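A quick way to resolve the symlink directly (pool and disk names here are placeholders, use your own):

```
# Sketch: resolve a zvol symlink to its backing /dev/zdN device
# "rpool" and "vm-100-disk-0" are placeholder names
readlink -f /dev/zvol/rpool/data/vm-100-disk-0
# prints something like /dev/zd16
```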
Hey
I think I've outdone myself here. For an encrypted VM which had uncertain memory requirements, I chose to work with changing the CPU configuration, memory ballooning and hotplug, enabling 1GB pages for the CPU, and NUMA.
Despite multiple reboots in multiple configurations, this now fails to recognise...
Thanks. Though I love its feature set, I have become hesitant about ZFS, as it consumes a lot of memory on the small-scale VM servers I build. I hope the people who distribute the Proxmox ISO take a moment to enable partitioning sooner or later; it is needlessly aggressive to claim entire disks.
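For what it's worth, the ARC can at least be capped so ZFS stays friendlier on small hosts; a minimal sketch (the 4 GiB figure is just an example, size it to your workload):

```
# Sketch: cap the ZFS ARC at 4 GiB (4294967296 bytes)
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u   # needed so the limit also applies at early boot
```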
This...
Is there a way to not wipe the entire disk on installation?
I seek to preserve a pre-installed Microsoft Windows partition and boot it as a VM.
I've done this before using QEMU and have a specific requirement to do so.
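For context, a minimal sketch of the plain-QEMU approach I refer to (the device path is illustrative, not my actual layout):

```
# Sketch: boot an existing physical Windows disk under plain QEMU
# /dev/sdb is a placeholder for the disk holding the Windows install
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive file=/dev/sdb,format=raw,media=disk
```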
Br,
JL
Having had a quick look at https://www.kernel.org/doc/html/v5.10/admin-guide/kernel-parameters.html, I did not find that kvm=off should have any impact on KVM; it is most likely an NVIDIA-specific boot parameter. If you do not experience performance issues and passthrough works, stick with it.
Just tested with a different Win10 Pro VM, same problem: changing the CPU results in a boot loop, and changing it back to kvm64 fixes it. Same for both.
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)
There is one MS Windows VM which behaves quite peculiarly. Changing the CPU away from kvm64 (e.g. to host) results in an inability to boot. Even a reboot into safe mode using msconfig does not work as it usually would.
Despite having configured a Windows 10 .iso to boot from, the 'press any key to boot...
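For anyone reproducing this, the same CPU change can be made from the shell; a sketch (VMID 101 is a placeholder):

```
# Sketch: switch the CPU type from the CLI
qm set 101 --cpu host    # this is what triggers the boot loop here
qm set 101 --cpu kvm64   # reverting makes the VM boot again
```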
Dear,
Running an IDS VM, I realised I only see broadcast traffic and such. What is the preferable way to create SPAN ports so the IDS VM can monitor all traffic on the virtual networks and physical interfaces?
I assume monitoring all traffic on the physical interfaces is easy, but VM to VM...
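For reference, traffic on a Linux bridge can be mirrored with tc rather than a real SPAN port; a minimal sketch, assuming the IDS VM's interface is tap100i0 on vmbr0 (both placeholders):

```
# Sketch: copy all traffic entering vmbr0 to the IDS VM's tap interface
tc qdisc add dev vmbr0 ingress
tc filter add dev vmbr0 parent ffff: matchall \
  action mirred egress mirror dev tap100i0
```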
sigh, me and me
zpool add rpool /dev/disk/by-id/nvme......
did the job. I remember my intent was to create a separate pool, but now I welcome the storage.
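For the record, the separate pool I originally intended would have looked roughly like this (the pool name is a placeholder):

```
# Sketch: create a new standalone pool instead of extending rpool
zpool create nvpool /dev/disk/by-id/nvme-...
```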
Ehr. Yes, indeed. The NVMe device exists; I just cannot find it anywhere else.
It has partitions assigned to it which I assume were made by ZFS.
cfdisk /dev/nvmen2p1
actually shows it was assigned a label and ZFS partitions.
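The same can be checked non-interactively; a sketch (device names are placeholders, not my actual ones):

```
# Sketch: inspect partitions and any ZFS labels without cfdisk
lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/nvme0n1   # partition layout
zdb -l /dev/nvme0n1p1                              # dump ZFS labels, if any
```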
zpool status shows the 3 disks I already have in place, not the newly added disk.
errors: No known data errors
zpool import shows two pools I don't know, but again nothing out of the ordinary.
I can see the NVMe disk in the Proxmox web UI, but I cannot find it anywhere else.
When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it, it says usage ZFS.
When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool.
Please assist.
Yeah, it went away and came back again. This is truly a mess and left unaddressed. Somehow nobody knows where these messages come from or how to stop them.
This is in the back of my mind, actually. Linux is infamous for not providing sensible memory reporting to users. Consuming all possible RAM is 'by design' on Unix systems; reporting it, however, is a different matter. On Linux systems, free, VIRT, MEM%, buffers and cache are essentially in use all the time...
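A quick illustration of which fields usually cause the confusion:

```
# Sketch: "available" and "free" tell different stories
free -h                                         # compare free vs available
grep -E 'MemAvailable|Buffers|^Cached' /proc/meminfo
```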
Interesting. I only see that temporarily when I restart all VMs.
Things I will try in the future:
- stop some non-essential VMs which I suspect may play a role in this behavior
- disable the ballooning service in the VM if it is running without ballooning enabled
- disable KSM (see the sketch below)
- learn more about...
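A sketch of what disabling KSM on the host would look like (the echo also unmerges pages that are already shared):

```
# Sketch: stop the KSM tuning daemon and unmerge shared pages
systemctl disable --now ksmtuned
echo 2 > /sys/kernel/mm/ksm/run
```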
Hi,
Yes, this looks very similar to what I'm observing.
The reasoning given is that ZFS reserves half of the available RAM; looking at the memory consumption, I have not found that to be the case. Also, in your cases that would put the RAM consumed at between 57 and 58GB.
What I'm considering is that KSM may actually be...
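To check the ZFS side directly, the current ARC size can be read on the host; a sketch:

```
# Sketch: print the current ARC size in GiB
awk '/^size/ {print $3 / 2^30 " GiB"}' /proc/spl/kstat/zfs/arcstats
```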
To my understanding this is all because of configuration issues. You may have set a default which now negatively affects performance.
Also, this is exactly what kvm=off means: literally every single instruction is emulated now, which is about as fast as it can go without KVM.
the "issue" is back, no machine is running with ballooning enabled. edit: Note the server was not restarted, just all the VM, as such ZFS memory consumption is possibly not part of the growing memory consumption.
Afaik it is not because of the memory reservation reported in MEM but that...
Not my impression. It showed 10GB KSM with 67% reported use as well. For reasons beyond me, while updating the forum post the reported memory rose to 79%. The total assigned to VMs accounts for much less than 93% of RAM; roughly 47GB of RAM is configured for all VMs combined. Note I have stopped a 4GB...
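To put a number on the KSM side, the amount of RAM it is actually sharing can be computed from sysfs; a sketch (assumes 4 KiB pages):

```
# Sketch: pages_sharing * page size = RAM currently saved by KSM
awk '{printf "%.1f GiB\n", $1 * 4096 / 2^30}' /sys/kernel/mm/ksm/pages_sharing
```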