The ZFS message sounds like a red herring, but the ext4 journal messages look more problematic.
Can you log in to the host? Either via SSH (which would be nicer, as you could copy&paste output) or directly on the screen from which you...
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#user_mgmt
You will need to give the user access to the resources they need. So if they should be able to edit virtual disks, or choose which ISO is used, give them access to the storages. For networks, you...
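As a rough sketch, the same permissions can also be granted on the CLI with pveum (the user name, VMID, storage, and roles here are just examples; check pveum role list for what each built-in role allows):

```shell
# Create a user in the PVE realm (name is an example)
pveum user add alice@pve

# Let the user manage VM 100, including its virtual disks
pveum acl modify /vms/100 --users alice@pve --roles PVEVMAdmin

# Let the user allocate disk space and use ISOs on a specific storage
pveum acl modify /storage/local --users alice@pve --roles PVEDatastoreUser
```

ACLs set on a path like /storage/local only apply to that storage; set them on /storage (optionally propagated) to cover all storages.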
If you have a pull request for Proxmox, please be so kind as to link it here so I can review/improve it before there is any chance Proxmox will merge it.
I'd add the option to the Ceph pool configuration UI, because it's linked on a per-pool basis and...
Please look at the explanation at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher and at what has been discussed here. The behavior is as expected.
Definitely not tested!
What speaks against upgrading the current cluster node by node? Too little space on the other nodes to free one up?
Alternatively, you could perhaps take an entirely different approach: PVE 7.4 should...
For stretch PVE + Ceph clusters we recommend a full PVE install for the tie-breaker node. See the newly published guide: https://pve.proxmox.com/wiki/Stretch_Cluster
With a 3-node Ceph cluster you need to be careful when planning how many disks you add as OSDs. More but smaller disks are preferable, because if just a single disk fails, Ceph can only recover onto the same node in such a small cluster.
For example, if...
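A rough back-of-the-envelope check along these lines (the disk sizes and usage figures below are hypothetical, and 0.85 is Ceph's default nearfull threshold used here as a safety margin):

```python
def node_can_self_heal(osd_sizes_tb, used_fraction, nearfull_ratio=0.85):
    """Check whether, for each OSD on a node, the *other* OSDs on the
    same node could absorb its data after a failure. In a 3-node cluster
    with 3 replicas, recovery has to stay on the node that lost the OSD."""
    total = sum(osd_sizes_tb)
    for failed in osd_sizes_tb:
        remaining = total - failed
        if remaining == 0:
            # Only one OSD on the node: nothing left to recover onto.
            return False
        moved = failed * used_fraction          # data that must be re-replicated
        new_used = (remaining * used_fraction + moved) / remaining
        if new_used > nearfull_ratio:
            return False
    return True

# Four 2 TB OSDs per node, 50% full: survivors end up ~67% full.
print(node_can_self_heal([2.0, 2.0, 2.0, 2.0], 0.5))  # → True

# Two 4 TB OSDs per node, 50% full: the survivor would be 100% full.
print(node_can_self_heal([4.0, 4.0], 0.5))            # → False
```

This is why "more but smaller" wins: the same raw capacity split across more OSDs leaves headroom for on-node recovery.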
If I haven't misunderstood anything: since the linked clones are only connected to the template via the disk images, you could (temporarily) move the VMs' disks to another storage (Disk Action -> Move Disk). This will cause the entire...
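The Move Disk step can also be done on the CLI; a sketch, with the VMID, disk slot, and storage names being examples. Note that moving a linked clone's disk produces a full standalone copy on the target storage, detaching it from the template's base image:

```shell
# Move disk scsi0 of VM 100 to another storage (full copy is created)
qm disk move 100 scsi0 other-store

# Later, move it back if desired
qm disk move 100 scsi0 local-zfs
```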
In general, it is recommended to use the VirtIO devices. For disks, SCSI with VirtIO-SCSI-single as the controller, or VirtIO Block. For NICs, VirtIO. That way no full hardware device has to be emulated for the guest, and performance should...
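For an existing VM this can be set with qm (VMID, storage name, disk size, and bridge are examples; for an already installed guest, make sure the VirtIO drivers are present before switching the disk bus):

```shell
# Use the VirtIO SCSI single controller for VM 100
qm set 100 --scsihw virtio-scsi-single

# Attach a new 32 GB SCSI disk on storage local-zfs
qm set 100 --scsi0 local-zfs:32

# Use a VirtIO NIC on bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0
```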
The API approach (e.g. with pvesh) is probably the best option. If you are unsure which calls are of interest, do the procedure via the web UI and take a look at the browser's developer tools, especially the network tab, to see which API calls...
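For example, something along these lines (the node name "pve1" and VMID 100 are placeholders; the exact paths will match what you see in the network tab, since the web UI uses the same API):

```shell
# Browse the API tree interactively
pvesh ls /nodes

# Read a VM's current configuration
pvesh get /nodes/pve1/qemu/100/config

# The same kind of call the UI makes when changing a setting, e.g. memory
pvesh set /nodes/pve1/qemu/100/config --memory 4096
```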
My guess is that the VM does not report back any detailed information, since host memory usage and memory usage show the same value. You can check that in the VM's Monitor submenu by running info balloon there. Compare the output from a VM that...
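The same query works from the CLI as well (VMID 100 and node name "pve1" are examples); a VM whose guest agent/balloon driver isn't reporting typically returns far fewer fields:

```shell
# Interactive: open the QEMU monitor for VM 100, then type "info balloon"
qm monitor 100

# One-shot via the API
pvesh create /nodes/pve1/qemu/100/monitor --command "info balloon"
```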
VLAN tag 1 is usually the default untagged VLAN. As in, those packets won't get a VLAN tag added when they leave the physical interface.
But whenever you assign one of the SDN VNets to a guest's virtual NIC, any packet leaving the host should get...
Having multiple vmbr interfaces with the same physical bridge port doesn't sound like a good idea. I would definitely recommend that you set up an SDN VLAN zone with vmbr0 as the base bridge and go from there for all the VLANs that should...
VLANs can be tricky to debug. If your switch supports it, give it an IP in the VLAN; then you can check whether the connection to the switch works.
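To test from the PVE host side, you can temporarily give the host itself an address in the VLAN on top of the (VLAN-aware) bridge and ping the switch. VLAN ID 30 and the addresses below are examples:

```shell
# Create a tagged sub-interface on vmbr0 for VLAN 30
ip link add link vmbr0 name vmbr0.30 type vlan id 30
ip addr add 192.168.30.10/24 dev vmbr0.30
ip link set vmbr0.30 up

# Ping the switch's IP in that VLAN
ping -c 3 192.168.30.1

# Clean up afterwards
ip link del vmbr0.30
```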
For guests, these days I recommend that you use the SDN VLAN zone. It is one easy place to have every VLAN...
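Besides the Datacenter -> SDN UI, such a zone can be sketched via the API like this (zone name "lan", VNet name "vlan30", and the tag are examples):

```shell
# Create a VLAN zone on top of vmbr0
pvesh create /cluster/sdn/zones --type vlan --zone lan --bridge vmbr0

# Add a VNet carrying VLAN tag 30 in that zone
pvesh create /cluster/sdn/vnets --vnet vlan30 --zone lan --tag 30

# Apply the pending SDN configuration
pvesh set /cluster/sdn
```

Guest NICs can then simply be attached to the VNet "vlan30" instead of picking a tag per NIC.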