See "CPU Types" and "Custom CPU Types" in our docs: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
The "CPU Type" determines which CPU is emulated, i.e. which flags are passed through to (or presented to) the guest. For SSE4, any reasonably recent AMD or Intel model is enough...
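As an illustration only (the vmid path is a placeholder, and the model is just one example): Nehalem is, to my knowledge, the oldest Intel type QEMU ships that exposes SSE4.1/SSE4.2, so selecting it in the VM config is a single line:

```
# /etc/pve/qemu-server/<vmid>.conf -- any SSE4-capable model works, Nehalem is one example
cpu: Nehalem
```

The same can be set from the shell with `qm set <vmid> --cpu Nehalem`, or via the "Processors" panel in the GUI.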
Don't know what you mean by that... The location of your disk image (of the VM) depends on your storage. If you're using ZFS, it will be something like /dev/zvol/pool/vm-<vmid>-disk-0; if you're using LVM, then /dev/mapper/pool-vm--<vmid>--disk--0, etc... That's the file you need to 'dd' to the...
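To sketch the 'dd' step itself (paths are stand-ins: a throwaway file plays the role of the real /dev/zvol/... block device here, and the vmid is hypothetical):

```shell
# Stand-in for the VM's block device, e.g. /dev/zvol/pool/vm-100-disk-0.
truncate -s 10M vm-disk.img
# Copy the raw image block-for-block to a target file (or another device).
dd if=vm-disk.img of=vm-disk-copy.img bs=1M
# Verify the copy is bit-identical.
cmp vm-disk.img vm-disk-copy.img && echo "copy OK"
```

With the real device, make sure the VM is shut down first so the image is in a consistent state.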
I've sent a patch to the mailing list, should be out with the next update or so: https://lists.proxmox.com/pipermail/pve-devel/2021-October/050393.html
Anyone who can reproduce it, could you try setting all disks to use aio=native and attempt to trigger it again?
That is, edit your config and append the option to every disk line, e.g. from '100.conf.txt' in the post above:
scsi0...
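For illustration only (the storage name, vmid, and size below are made up; keep whatever options your line already has and just append `aio=native`), a disk line would then look like:

```
scsi0: local-lvm:vm-100-disk-0,size=32G,aio=native
```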
I cannot reproduce the issue you are running into here. Usually we never remove anything from the config file, unless it's an unknown option... could it potentially be that you migrated to a slightly outdated node and then did the "stop" action there? Would be weird, since the TPM worked, but...
I think in this case you'd need your described workaround with multiple bridges. Why do you need to add virtio adapters with 'args' in the first place though?
Provided the correct directory structure is on the NFS share, yes. PVE creates it on the first mount; the image then has to go into "dump".
On the NFS server, of course ;) Keep in mind that PVE also needs write permissions to create the structure mentioned above.
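For reference, the layout PVE creates on a directory-type storage looks roughly like this (a sketch from memory; the exact set of directories depends on which content types are enabled on the storage):

```
dump/            # vzdump backups -- this is where the image goes
images/<vmid>/   # VM disk images
template/iso/    # ISO images
template/cache/  # container templates
```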
Right, and as I said, if a connection breaks then TCP will retry for a bit until the connection either comes back or times out. A regular unreliable network thus will not fully interrupt your sync. That is standard practice, TCP handles the reliability - we do not retry on the application layer...
Are the VMs i440fx? If so, the problem is known and a fix exists; it should be available shortly: https://lists.proxmox.com/pipermail/pve-devel/2021-October/050342.html
Until then, as a workaround, use q35 or add an efidisk via the shell: qm set <vmid> -efidisk0 <storage>:1,efitype=2m
Yes, how else would the password be stored? The fingerprint is public anyway, the password must be stored in a way that can authenticate against the server, so any hashing wouldn't help.
Note that the /priv/ subdirectory is only accessible by the root user. If you're worried about security...
Some misconceptions here, let me try to clear them up:
You're comparing apples to peaches. A qcow2 file does not contain files, it contains a disk image. That is, you can attach it to a VM and the VM will see it as a hard disk, not a file storage. So your "classical EXT4 partition" would...
No, that's the job of TCP (and it generally does that really well). But since you're trying to connect via an SSH tunnel, the error you're seeing emerges from the kernel directly: TCP will retransmit until it receives a response, not until it receives a *successful* response. And since your SSH...
Uhm... why not? Type it in, run it, and post what it says?
Tip for getting help online in general: whenever you're tempted to write "doesn't work" without any context, rewrite your sentence with context ;)
What exactly doesn't work? What are you expecting to happen, and what happens instead? What do...