Search results

  1. [SOLVED] destroying LUKS thru configuration

    Though I do not know how to fix this, the data is intact. ZFS disks for VMs are found under /dev/zvol/&lt;poolname&gt;/data/...; these are actually symbolic links to /dev/&lt;devicename&gt;. Look for vm-&lt;vmid&gt;-&lt;partid&gt; and use `ls -l vm-<vmid>-<partid>*` to identify the actual device name under...
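The symlink-resolution step described in the snippet can be sketched as follows. The pool name (`rpool`), vmid (`100`) and backing node (`zd16`) are all hypothetical; the real commands would run against `/dev/zvol/` on a PVE host, so the layout is simulated with a throwaway symlink here purely to show how the link resolves:

```shell
# On a real PVE host one would run something like:
#   ls -l /dev/zvol/rpool/data/vm-100-disk-0*
#   readlink -f /dev/zvol/rpool/data/vm-100-disk-0
# Simulated with stand-in files so the resolution step is visible:
mkdir -p /tmp/zvol-demo
touch /tmp/zvol-demo/zd16                          # stand-in for the real /dev/zdN node
ln -sf /tmp/zvol-demo/zd16 /tmp/zvol-demo/vm-100-disk-0
readlink -f /tmp/zvol-demo/vm-100-disk-0           # prints the backing device path
```

`readlink -f` follows the symlink to its target, which is how you map a zvol name back to the `/dev/zdN` device it points at.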
  2. [SOLVED] destroying LUKS thru configuration

    Hey, I think I've outdone myself here. For an encrypted VM with uncertain memory requirements I chose to experiment with changing the CPU configuration, memory ballooning and hotplug, enabling 1GB pages for the CPU, and NUMA. Despite multiple reboots in multiple configurations, this now fails to recognise...
  3. Partitioning on install

    Thanks. Though I love its feature set, I became hesitant about ZFS as it consumes a lot of memory for the small-scale VM servers I build. I hope the people who distribute the Proxmox ISO take a moment to enable partitioning sooner or later; it is needlessly aggressive to claim entire disks. This...
  4. Partitioning on install

    Is there a way to not wipe the entire disk on installation? I seek to preserve a pre-installed Microsoft Windows partition and boot it as a VM. I've done this before using QEMU and have a specific requirement to do so. Br, JL
  5. Nvidia PCIE Passthrough on Ubuntu VM returning "Unable to determine the device handle for GPU 0000:01:00.0: Unknown Error"

    Having had a quick look at https://www.kernel.org/doc/html/v5.10/admin-guide/kernel-parameters.html I did not find that kvm=off should have an impact on KVM; it is most likely an NVIDIA-specific boot parameter. If you do not experience performance issues and passthrough works, stick with it.
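For context on where `kvm=off` usually comes up: it is a QEMU `-cpu` flag rather than a Linux kernel parameter, which is why it does not appear in the kernel-parameters index. Proxmox exposes it through the `hidden=1` CPU option in the VM config. A sketch of the relevant line (the vmid in the path and the choice of `host` are assumptions):

```
# /etc/pve/qemu-server/<vmid>.conf
cpu: host,hidden=1
```

With `hidden=1`, Proxmox passes `kvm=off` to QEMU so the guest does not see the KVM hypervisor signature, which some NVIDIA drivers historically checked for during passthrough.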
  6. MSWX VM refuses to boot or bootloops on cpu change

    Just tested with a different Win10 Pro VM, same problem: changing the CPU results in a boot loop, changing it back to kvm64 fixes it. Same for both. pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)
  7. MSWX VM refuses to boot or bootloops on cpu change

    There is one MS Windows VM which behaves quite peculiarly. Changing the CPU to anything other than kvm64 (> host) results in an inability to boot. Even a reboot into safe mode via msconfig does not work as usual. Despite having configured a Windows 10 .iso to boot from, the 'press any key to boot...
  8. how to create a tap from physical to IDS VM

    Dear all, running an IDS VM I realised I only see broadcast traffic and the like. What is the preferable way to create SPAN ports so the IDS VM can monitor all traffic on the virtual networks and physical interfaces? I assume monitoring all traffic on the physical interfaces is easy, but VM to VM...
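One common answer to the SPAN-port question, when the bridge is Open vSwitch rather than a plain Linux bridge, is an OVS mirror that copies every packet on the bridge to the IDS VM's tap. A sketch, assuming an OVS bridge named `vmbr0` and an IDS tap named `tap100i0` (both names are hypothetical):

```shell
# Mirror all traffic on vmbr0 (VM-to-VM included) to the IDS VM's port.
# 'select_all=true' selects every packet crossing the bridge.
ovs-vsctl \
  -- --id=@ids get Port tap100i0 \
  -- --id=@m create Mirror name=ids-span select_all=true output-port=@ids \
  -- set Bridge vmbr0 mirrors=@m
```

This configures the OVS database, so it requires the bridge to actually be Open vSwitch; on a plain Linux bridge a `tc` mirred action per interface would be the rough equivalent.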
  9. ZFS disk device shows but unable to add to volume

    Sigh, me and my `zpool add /dev/disk/by-id/nvme...... rpool` did the job. I remember my intent was to create a separate pool, but now I welcome the storage.
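For the record, what the snippet says was originally intended (a separate pool rather than growing `rpool`) would be `zpool create` instead of `zpool add`. A sketch with a placeholder device path and pool name, not the poster's actual disk:

```shell
# Create a new, separate pool on the NVMe disk (names are placeholders):
zpool create tank /dev/disk/by-id/nvme-SAMPLE-SERIAL

# Verify the new pool and its vdev layout:
zpool status tank
```

Worth noting because `zpool add` attaches the disk as a new top-level vdev of the existing pool, which on the ZFS versions shipped with PVE 6.x is effectively irreversible.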
  10. ZFS disk device shows but unable to add to volume

    Ehr, yes, indeed. There exists the nvme device I cannot find back elsewhere. It has partitions assigned to it which I assume were made by ZFS. `cfdisk /dev/nvmen2p1` actually shows it was assigned a label and ZFS partitions.
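A quicker way to "find the device back" and confirm the ZFS signatures the snippet describes is `lsblk`, which lists every block device with its partitions and detected filesystem type:

```shell
# ZFS member partitions show up with FSTYPE 'zfs_member':
lsblk -o NAME,SIZE,TYPE,FSTYPE
```

This shows at a glance which disks carry ZFS partitions and which pool labels `cfdisk` was hinting at.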
  11. ZFS disk device shows but unable to add to volume

    `zpool status` shows the 3 disks I already have in place, not the newly added disk (errors: No known data errors). `zpool import` shows two pools I don't know, but again nothing out of the ordinary. I can see the nvme disk in the Proxmox web UI, but I cannot find it back anywhere else.
  12. ZFS disk device shows but unable to add to volume

    When I check 'Disks' under 'Storage View' it shows the 1TB nvme I have installed; next to it it says usage ZFS. When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB nvme, and I see no way to add it to this pool. Please assist.
  13. [Solved] RRDC update error

    Yeah, it went away and came back again. This is truly a mess and left unaddressed. Somehow nobody knows where these messages come from or how to stop them.
  14. Proxmox 6.x consumes more memory than assigned using ZFS

    This is in the back of my mind, actually. Linux is infamous for not providing users with sensible memory reporting. Consuming all possible RAM is 'by design' on Unix systems; however, reporting it is a different matter. On Linux systems free, VIRT, MEM%, buffers and cache are essentially used all the time...
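The memory-reporting ambiguity the snippet describes is exactly why modern kernels expose an "available" estimate alongside "free": page cache and buffers count as used but are reclaimable on demand. A minimal way to see both numbers:

```shell
# 'available' estimates memory usable without swapping,
# counting reclaimable cache; 'free' does not:
free -h

# The kernel's own estimate, straight from /proc:
awk '/MemAvailable/ {print $1, $2, $3}' /proc/meminfo
```

When judging whether Proxmox "consumes more memory than assigned", `MemAvailable` is the column to compare against, not `free`.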
  15. Proxmox 6.x consumes more memory than assigned using ZFS

    Interesting. I only see that temporarily when I restart all VMs. Things I will try in the future: stop some non-essential VMs which I suspect may play a role in this behavior; disable the ballooning service in the VM if it is running without ballooning enabled; disable KSM; learn more about...
  16. Proxmox 6.x consumes more memory than assigned using ZFS

    Hi, yes, this looks very similar to what I'm observing. The reasoning is that ZFS reserves half of the available RAM; looking at the memory consumption I have not found that to be so. Also, in your case that would put the RAM consumed at between 57-58GB. What I'm considering is that KSM may actually be...
  17. Proxmox 6.2: Navi 5700xt GPU Passthrough to Win10 Guest

    To my understanding this is all because of configuration issues. You may have set a default which now negatively affects performance. Also, this is exactly what kvm=off means: literally every single instruction is emulated now, which is about as fast as it can go without KVM.
  18. Proxmox 6.x consumes more memory than assigned using ZFS

    The "issue" is back; no machine is running with ballooning enabled. Edit: note the server was not restarted, just all the VMs; as such, ZFS memory consumption is possibly not part of the growing memory consumption. Afaik it is not because of the memory reservation reported in MEM but that...
  19. Proxmox 6.x consumes more memory than assigned using ZFS

    Not my impression. It showed 10GB KSM with 67% reported use as well. For reasons beyond me, while updating the forum the reported memory rose to 79%. The total assigned to VMs accounts for much less than 93% of RAM; roughly 47GB of RAM is configured for all VMs combined. Note I have stopped a 4GB...
  20. Proxmox 6.x consumes more memory than assigned using ZFS

    Obviously, as we spoke it had to go near 80%.
