Search results

  1. Windows 10 guest stutters during SMB file copy

    Sorry - I ought to know better! Here's the guest config:

        agent: 1
        balloon: 0
        bios: ovmf
        boot: order=scsi0;net0
        cores: 16
        cpu: host
        efidisk0: local-lvm:vm-110-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
        hostpci0: 0000:01:00,pcie=1,rombar=0,x-vga=1
        hostpci1: 0000:15:00.0,pcie=1,rombar=0...
  2. Windows 10 guest stutters during SMB file copy

    Quick update to this - while running the SMB copy, I see "Hardware Interrupts and DPCs" go from 0 to 8% (total) CPU, which seems to account for the CPU core being pegged. Wondering if that's something to do with QEMU or a native Windows 10 problem.
  3. Windows 10 guest stutters during SMB file copy

    I have a strange Windows 10 guest issue. I don’t have any native Windows systems to just try it. When I copy a 20GB file over SMB, Windows 10 stutters - not just the mouse pointer, but animations in a running browser (for instance). I tried copying the file locally, while also running iPerf...
  4. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    It seems odd to be forced to move root and swap off the “PVE” VG, just to be able to have “local-lvm” on a different SSD. So I’d be trading one quirk for another. What’s the recommended way of doing this? I can’t believe I’m the only one that would want the Proxmox root on its own disk, but...
  5. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    Thanks again - but wouldn't that just then move the problem to the `local` storage?
  6. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    Thanks - but here's the issue:

        vgs
          VG   #PV #LV #SN Attr   VSize    VFree
          data   1   1   0 wz--n-   <3.64t 376.00m
          pve    1   2   0 wz--n- <118.24g  <70.68g

    I already have a volume group called "pve" - and as I suspected, I can't rename the 'data' one: lvm vgrename...
  7. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    Thanks for your help - but I'd like to do the opposite, and make the storage on this new node match the others. Is that not possible at a logical level, if the underlying storage is instead on a different SSD to the pve/lvm volume?
  8. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    root@pve-pc:~# cat /etc/pve/storage.cfg

        dir: local
                path /var/lib/vz
                content vztmpl,iso,backup

        lvmthin: local-lvm
                thinpool data
                vgname pve
                content rootdir,images

        pbs: pbs
                datastore backups
                server 192.168.0.3
                content backup
                fingerprint ...
                prune-backups...
  9. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    To take a step back - how do I get it to behave the same as my other nodes - where I just went with the default of having both LVM and LVM-thin on the same disk? Storage at the cluster level seems to want the volume group to be called "pve" everywhere? But what about the situation where the node...
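    (A minimal sketch of one way to handle the mismatch described above, assuming the new node's thin pool lives in a VG called "data" with pool name "data", and that the node is named "pve-pc" - all names here are taken from the snippets or are illustrative. Since Proxmox storage definitions are cluster-wide, a separate entry in /etc/pve/storage.cfg restricted to the new node avoids having to rename the VG:

```
lvmthin: local-lvm-data
        thinpool data
        vgname data
        content rootdir,images
        nodes pve-pc
```

    The existing "local-lvm" entry (vgname pve) can likewise be limited to the other nodes with its own "nodes" line.)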
  10. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    it says "no such logical volume pve/local-lvm". Maybe it's just because the thin pool is called "data"? But that happened when I created the thin pool.
  11. Added a new node but "local-lvm" storage unknown - due to different volume group name?

    I just added a new node to my cluster. Usually, I have both the LVM and LVM-thin storage on the same SSD. On my new node I wanted to have one small SSD for just the Proxmox root, and a second for the VM data. So what I've ended up with is:

        nvme0n1 259:0 0 3.6T 0 disk...
  12. How to share a single SSD for both VMs and plain "directory" storage

    Sorry, confused now. You're saying I can't create an ext4 file system on a thin volume? I was proposing to do this (on the default 'data' thin pool Proxmox sets up):

        lvcreate --type thin --name myvol --virtualsize 100G pve/data
        mkfs.ext4 /dev/myvg/myvol
        mount /dev/myvg/myvol /mnt/myvol

    My...
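    (For reference, a hedged sketch of that sequence with the volume-group name used consistently - the commands in the post above mix pve/data with /dev/myvg/..., but a thin LV created in pve/data appears under /dev/pve/. The LV name "myvol" and the mount point are illustrative, and these commands require root on a host with the default pve/data pool:

```
# Create a 100G thin LV in the default pve/data thin pool.
lvcreate --type thin --name myvol --virtualsize 100G pve/data
# The device node lives under the VG name: /dev/pve/myvol, not /dev/myvg/myvol.
mkfs.ext4 /dev/pve/myvol
mkdir -p /mnt/myvol
mount /dev/pve/myvol /mnt/myvol
```

    Being thin-provisioned, the LV only consumes pool space as the file system writes to it.)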
  13. How to share a single SSD for both VMs and plain "directory" storage

    Thanks I'm not looking to move the VMs/LXCs away from LVM-thin storage. What's confusing me is how I can have both that, as well as some raw ext4 space (on the same SSD) that I don't have to declare as only a certain size? Unless the solution is simply to add another LV manually (on the...
  14. How to share a single SSD for both VMs and plain "directory" storage

    I have a 4TB disk I'd like to use for VMs/LXCs, as well as just local storage (mostly to mount to the LXCs - as a form of shared storage). Ideally I don't want to have to arbitrarily partition the drive, just dynamically allow either the VMs/LXCs use as much as they need, and the rest to be...
  15. [SOLVED] No console output after "initialising ramdisk" after upgrade from 8.0 to 8.1

    Sorry I didn't get around to this - it seems to have been fixed with the latest update.
  16. [SOLVED] No console output after "initialising ramdisk" after upgrade from 8.0 to 8.1

    Thanks - I've posted on the thread in the second one you linked https://forum.proxmox.com/threads/opt-in-linux-6-5-kernel-with-zfs-2-2-for-proxmox-ve-8-available-on-test-no-subscription.135635/page-11#post-611280 Seems like it's not resolved, but not a massive issue for me as I don't really...
  17. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    When you say "stuck", can you still SSH into the box, and access the web gui? I initially thought it was hanging, but realised it was just the console. I've browsed through the suggestions in this thread, but it looks like there's no solution yet?
  18. [SOLVED] No console output after "initialising ramdisk" after upgrade from 8.0 to 8.1

    After upgrading from 8.0 to 8.1, I only got a couple of log lines in the console after rebooting. It stops at the message about initialising ramdisk. I therefore assumed it had hung, rebooted fine in a 6.2 kernel, and checked previous boot logs, only to find no errors. I then rebooted with the...
  19. Two seemingly identical Windows 10 VMs. 20% vs 2% host CPU while VM idle

    Interesting idea - I'll try that and report back. Thanks!
  20. Two seemingly identical Windows 10 VMs. 20% vs 2% host CPU while VM idle

    Thanks for the response. 1. The rombar differences were kind of random - neither actually needs it (08:00 is a UEFI GPU, 0a:00.3 is a USB controller). In case somehow it did make a difference, I just tried, and I get the same results 2. Yes I specifically tried turning off the tablet pointer on...