Ok I'm a bit late to this discussion but...
I did that exact same thing of running an NFS server inside an LXC container. The areas on the host PVE system that I wanted to share out were bind mounted into the container.
As mentioned before, to stop UID/GID remapping from getting in the way...
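Roughly what I mean, in case it helps someone later (the container ID, host path and mount point below are placeholders, not my actual setup):

  # bind-mount a host directory into container 101
  pct set 101 -mp0 /tank/shared,mp=/srv/shared
  # then install and configure the NFS server inside the container as usual
  pct exec 101 -- apt install nfs-kernel-server

For an unprivileged container the host-side ownership has to line up with the remapped UIDs (offset by 100000 by default), which is exactly the remapping issue mentioned above.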
As said previously, just use the migration tool in the UI (the Proxmox people know best how to migrate stuff on their system :) ). However, if you want to automate it, or to leave the original VM untouched (migration moves the disk), then yes, that command will essentially do it:
qemu-img convert...
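For reference, the sort of invocation I mean looks roughly like this (file names and target format are only examples; pick whatever your destination storage expects):

  # convert a VMware disk image to qcow2, showing progress
  qemu-img convert -p -f vmdk -O qcow2 source-disk.vmdk vm-100-disk-0.qcow2
  # alternatively, let PVE import it straight into a storage
  qm importdisk 100 source-disk.vmdk local-lvm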
Yes, both this new Debian 12/VMware Workstation 17 VM and the older working Ubuntu/VMware Player VM have the CPU type set to host and the relevant kernel module loaded on the PVE host. The years-old VMware Player instance still performs well, as expected. Yet the new VMware Workstation 17...
I'm sorry, but this is a bit of a recurring theme... I recently tried to run VMware Workstation 17 inside a Debian 12 VM on Proxmox 8 and, despite the VT-x flag being recognised and another VM running an older version of VMware Player working, this setup didn't use the acceleration offered by VT-x...
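For what it's worth, these are roughly the checks I did to confirm nesting was in place (Intel host assumed; on AMD it's the kvm_amd module and the svm flag instead, and the VMID is a placeholder):

  # on the PVE host: is nested virtualisation enabled?
  cat /sys/module/kvm_intel/parameters/nested
  # is the VM's CPU type really set to host?
  qm config 100 | grep ^cpu
  # inside the Debian 12 guest: is VT-x actually exposed?
  grep -c vmx /proc/cpuinfo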
Hi all,
I noticed the above message appearing occasionally in the logs on one of my PVE servers. The VMs in question exhibit no apparent issues; the message tends to complain about one particular disk more often than not. The storage in question is LVM-Thin on an SSD. The frequency ranges from a number of weeks/months...
No, not yet. There is that horrible workaround of nested PVE/QEMU instances, see above. Basically it seems to work for normal bridged interfaces but not for internal ones. When I get an answer from the kvm/qemu team I'll post back here.
Thank you very much for your reply :-). Interesting. So the reclaiming of disk space is a function solely within LVM and not PVE (beyond passing the TRIM requests through to LVM). For some reason I got it into my head that PVE was passing explicit `return this block to the free pool' requests to...
I simply want to shrink the disk down to free up unused space, returning it to the LVM thin pool. The VM itself runs fine, but at the moment the only ways to reclaim the space would be either to do a native backup of the disk, recreate the disk and restore from the backup, or to mount the disk in...
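To make the mechanism concrete for anyone else reading: for a guest that is new enough to issue TRIM, the usual route (as I now understand it) is just to enable discard on the virtual disk and let the guest do the work, with PVE/QEMU passing the requests down to the thin LV. VMID, storage and disk names below are placeholders:

  # pass guest discards through to the LVM-thin volume
  qm set 100 -scsi0 local-lvm:vm-100-disk-0,discard=on
  # then, inside the guest
  fstrim -av

None of which helps, of course, if the guest can't issue TRIM in the first place.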
Hi all,
I'm using LVM-Thin storage for my VMs, one of which is a very old legacy Linux VM. So old that the kernel doesn't support TRIM. The file system is EXT2. Is it possible to somehow run fstrim from the PVE host on the LVM-thin volume and have the space reclaimed in the thin pool? The VM...
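To sketch the kind of thing I had in mind (the VM would be shut down first; the VG, volume and partition names are made up for illustration, and I'm not at all sure FITRIM is honoured when the host mounts an EXT2 filesystem):

  # map the partitions inside the thin volume; kpartx prints the names it creates
  kpartx -av /dev/pve/vm-105-disk-0
  mkdir -p /mnt/legacy
  # mount whichever /dev/mapper/... partition kpartx reported
  mount /dev/mapper/pve-vm--105--disk--0p1 /mnt/legacy
  # ask the filesystem to discard its free space
  fstrim -v /mnt/legacy
  umount /mnt/legacy
  kpartx -dv /dev/pve/vm-105-disk-0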
<snip>
Thank you so much, Fabian, for that brilliant explanation. I tried googling around for an answer for ages, but apart from the "not needed on thin LVs" comments scattered about the place, I got no further.
Just curious... Why isn't it necessary to set issue_discards=1 when using PVE with LVM thin provisioning? Surely this is still needed when a thin volume is deleted (as there's always stuff left lying around in the volume, etc.)?
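My current understanding, for anyone else wondering (the pool name pve/data is just the PVE default), is that the distinction shows up directly in LVM:

  # thin pools handle discards themselves; 'passdown' is the default mode
  lvs -o lv_name,discards pve/data
  # issue_discards in /etc/lvm/lvm.conf only affects space freed from regular
  # (non-thin) LVs, e.g. on lvremove/lvreduce
  grep issue_discards /etc/lvm/lvm.conf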
I had a very similar issue when upgrading from 6.4 through to 8.1. None of my containers displayed anything on the console, but they booted up and I could SSH in. The fix above, switching the console mode to /dev/console, worked as described (and many thanks for the posted solution :)). However I...
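In case it saves anyone a trip to the GUI, the same change can be made per container from the host (the container ID is a placeholder):

  # switch the container's console to /dev/console
  pct set 105 -cmode console

(or via the GUI under the container's Options -> Console mode).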
Sorry I took a very long time to reply, been moving house, selling the old family home and re-setting up my home lab etc...
Anyway I tried the exact same experiment on PVE version 8.1 and had the exact same issue. However this also occurred on a plain KVM/QEMU install on regular Ubuntu and so...
Update: I have tested this on a `bare' KVM/QEMU setup, no proxmox software involved, and it suffers from exactly the same issue. I shall raise this with the kvm/qemu team.
Many thanks for your help, and sorry to take up your time. If I/someone else gets to the bottom of this I'll of course post...
Yup, no firewalls in the way. It's weird. Basically, when a VM running on the same L0 PVE server tries to connect in, you can see it sending the initial SYN packet, two TCP retransmits, and then the originating VM trying to find the MAC address of the L2 VM with a couple of ARP requests (which makes...
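For reference, I was capturing on the bridge with something along these lines (the interface name and port are placeholders for my setup):

  # -e shows the MAC addresses, which is how the ARP behaviour shows up
  tcpdump -eni vmbr0 'arp or (tcp port 22)'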
Hello again,
I have an apology to make: I didn't mention what type the L1 hypervisors were :-(.
I tried the tcpdump command, but it didn't show anything odd as ping worked; I just couldn't SSH into the VM unless it was from another real machine. Likewise, ip route and ip neighbor showed nothing...
Firstly sorry about the late reply. Life took over! :-(. BTW many thanks for the tip about L0-2, much simpler!
Anyway I have done some further investigating...
In fact I can successfully ping the L2 VM from any of my networks but, for example, sshing in from a PVE L1 VM doesn't work, nor does...
Ok just to give some feedback in case someone else wants to do this...
It actually works surprisingly well, but the system only had one node and wasn't part of a cluster. However, use these instructions at your own risk... You've been warned!
So I actually backed up the system, minus the VM...