Search results

  1. aaron

    [SOLVED] Proxmox Cluster 3 nodes, Monitors refuse to start

    Hmm, the create logs look okay. Yep. You need a quorum of available MONs for the Ceph cluster to work. So usually 2 of 3.
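    The majority rule in that last sentence can be sketched with plain shell arithmetic (nothing Proxmox-specific):

```shell
# A MON quorum is a strict majority of all monitors: floor(n/2) + 1
n=3
echo $(( n / 2 + 1 ))   # -> 2, so 2 of 3 MONs must be up
```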
  2. aaron

    [Fix] Proxmox ACME Aliyun Plugin Error "SignatureDoesNotMatch" (Error add txt for domain)

    Thanks for reporting this. But the forum is not the correct place. Please open a new bug report in our bugtracker https://bugzilla.proxmox.com/ where we can keep better track of this.
  3. aaron

    CEPH stretched cluster: PVE partition is not the same as the CEPH partition!

    This is a situation that a stretch cluster does not protect you against. The main goal of a stretch cluster is to keep the cluster functional if one location is completely down, for example, due to a fire. The network between the locations needs to be set up as reliably as possible. How that is...
  4. aaron

    [SOLVED] Proxmox Cluster 3 nodes, Monitors refuse to start

    You mean in the ceph.conf file? That is no problem, as the /24 defines the subnet, so the last octet does not matter. What is interesting is that, according to ceph -s and the config file, only one MON is known to the running Ceph cluster. The other MONs might be shown in the Proxmox VE UI...
  5. aaron

    [SOLVED] Proxmox Cluster 3 nodes, Monitors refuse to start

    What's the output of ceph -s and cat /etc/pve/ceph.conf? Please paste the output within code tags or use the formatting buttons of the editor (</>).
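    For reference, both requested commands are run from a root shell on any cluster node (a sketch; the actual output depends on the cluster):

```shell
# Overall Ceph status, including which MONs are currently in quorum
ceph -s

# The cluster-wide Ceph configuration that Proxmox VE manages
cat /etc/pve/ceph.conf
```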
  6. aaron

    Disks readonly after upgrade to PBS4 [total defect]

    The ZFS message sounds like a red herring, but the ext4 journal messages look somewhat more problematic. Can you log in to the host? Either via SSH (which would be nicer, as you could copy & paste output) or directly on the screen from which you made the screenshots? The outputs of lsblk mount...
  7. aaron

    Delegated administration for one VM only possible?

    https://pve.proxmox.com/pve-docs/pve-admin-guide.html#user_mgmt You will need to give the user access to the resources they need. So if they should be able to edit virtual disks, or which ISO is used, access to the storages. For networks, you can give permissions on full zones or individual...
  8. aaron

    pve 9 "memory on pfsense ? "

    Please look at the explanation at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher and at what has been discussed here. The behavior is as expected.
  9. aaron

    Cluster with PVE 7.4 and PVE 9

    Definitely not tested! What speaks against upgrading the existing cluster node by node? Too little space on the other nodes to free one up? Alternatively, you could perhaps take a completely different approach: PVE 7.4 should already support remote migration for VMs. 1. New PVE 9 node...
  10. aaron

    3rd Ceph MON on external QDevice (Podman) - 4-node / 2-site cluster

    For stretch PVE + Ceph clusters we recommend a full PVE install for the tie-breaker node. See the newly published guide: https://pve.proxmox.com/wiki/Stretch_Cluster
  11. aaron

    Convert a linked clone into a full clone - ideally without downtime

    I don't think we currently allow a Move Disk onto the same storage.
  12. aaron

    Proxmox HA cluster with Ceph

    With a 3-node Ceph cluster you need to be careful when planning how many disks you add for OSDs. More but smaller is preferred, because if just a single disk fails, Ceph can only recover to the same node in such a small cluster. For example, if you use only 2 large OSDs per node, and one of them...
  13. aaron

    Convert a linked clone into a full clone - ideally without downtime

    If I haven't misunderstood anything: since the linked clones are only linked to the template via the disk images, you could (temporarily) move the VMs' disks to another storage (Disk Action -> Move Disk). This copies the entire contents of the disk into the new image on the other...
  14. aaron

    [SOLVED] Snapshots as volume chains problem

    Please. Threads in the forum can easily be missed with how much is going on.
  15. aaron

    Is the virtual Proxmox SATA III faster than 6 Gbit/s?

    In general, it is recommended to use the VirtIO devices: for disks, SCSI with VirtIO-SCSI-single as the controller, or VirtIO Block; for NICs, VirtIO. That way no full device has to be emulated for the guest, so performance should be better.
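    Applied to an existing VM, the recommendation above looks roughly like this with qm (a sketch; the VMID 100 and the storage/volume and bridge names are assumptions for illustration):

```shell
# Use the VirtIO SCSI single controller for the VM
qm set 100 --scsihw virtio-scsi-single

# Attach the existing disk image as a SCSI disk (storage/volume name assumed)
qm set 100 --scsi0 local-lvm:vm-100-disk-0

# Use a paravirtualized VirtIO NIC on bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0
```

    Note that the guest OS needs VirtIO drivers installed before switching the boot disk to a VirtIO controller.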
  16. aaron

    [SOLVED] List ESXi disks? (or where is source for qm import?)

    The API approach (e.g. with pvesh) is probably the best option. If you are unsure which calls are of interest, do the procedure via the web UI and take a look at the browser's developer tools, especially the network tab, to see which API calls are involved in each step.
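    As a starting point, pvesh exposes the same REST API paths the web UI calls (a sketch run on a PVE node; the node name is taken from the local hostname):

```shell
# List all cluster nodes via the local API
pvesh get /nodes

# List the storages visible on this node; drilling further into the paths
# shown by the browser's network tab works the same way
pvesh get /nodes/$(hostname)/storage
```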
  17. aaron

    VM Memory Usage Shows 102% After Upgrading to PVE 9

    My guess, since host mem usage and mem usage show the same value, is that the VM does not report back any detailed information. You can check that in the Monitor submenu of the VM and run info balloon there. Compare the output from a VM that reports it correctly to see the difference in infos...
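    The same check can be done from the node's shell instead of the web UI's Monitor submenu (a sketch; VMID 100 is an example):

```shell
# Open the QEMU human monitor for the VM, equivalent to the Monitor submenu
qm monitor 100
# At the qm> prompt, query what the balloon driver reports:
#   info balloon
# A VM whose guest does not report details will show far fewer fields here.
```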
  18. aaron

    Proper or best practice way to set-up VLANs on single NIC?

    VLAN tag 1 is usually the default untagged VLAN. As in, those packets won't get a VLAN tag added when they leave the physical interface. But whenever you assign one of the SDN VNETs to a guest's virtual NIC, any packet leaving the host should get the set VLAN tag assigned. One can check that with...
  19. aaron

    Proper or best practice way to set-up VLANs on single NIC?

    Looks better :) The subnets are useful to define, as the info there will pre-populate firewall aliases. You then need to apply it on the main SDN panel.
  20. aaron

    Proper or best practice way to set-up VLANs on single NIC?

    Having multiple vmbr interfaces with the same physical bridge port doesn't sound like a good idea. I would definitely recommend that you set up a SDN VLAN zone with vmbr0 as the base bridge for it and go from there for all the VLANs that should be accessible by the guests.