Search results

  1. gurubert

    Problem with PVE 8.1 and OCFS2 shared storage with io_uring

    `virtio` (virtio-blk) is something completely different. Please switch to `scsi` disks and the `virtio-scsi-single` virtual HBA in the VM.
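    A minimal sketch of that switch, assuming VM ID 100 (the disk itself also has to be detached and re-attached on the SCSI bus):

    ```bash
    # switch the virtual HBA to virtio-scsi-single (VM 100 is an example ID)
    qm set 100 --scsihw virtio-scsi-single
    # after re-attaching the former virtio0 disk as scsi0, fix the boot order
    qm set 100 --boot order=scsi0
    ```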
  2. gurubert

    New install. Proxmox + Ceph cluster config questions.

    You could add a standby-replay MDS instance that will become active faster than a cold standby. Each CephFS needs at least one active MDS, so with 8 CephFS you would need 8 MDS instances. Do you have the capacity to run these? Multiple MDS per CephFS are needed when the load is too high for just...
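    As a sketch, assuming a filesystem named cephfs1, standby-replay is enabled per CephFS:

    ```bash
    # allow a standby MDS to follow the active MDS journal for this filesystem
    ceph fs set cephfs1 allow_standby_replay true
    ```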
  3. gurubert

    Cluster A and ClusterB same management network subnet

    The VM config does not reside in the VM storage directory like it is with VMware. You would need to copy or move the NNN.conf file from /etc/pve of one cluster to the other. And make sure there is no ID conflict and the storages and network bridges are all named the same. Otherwise you would...
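    A rough sketch for a QEMU VM, assuming the example ID 123 and example node names (CT configs live under lxc/ instead of qemu-server/):

    ```bash
    # copy the VM config from cluster A to cluster B;
    # run only after verifying that ID 123 is unused on cluster B
    scp root@node-a:/etc/pve/qemu-server/123.conf \
        root@node-b:/etc/pve/qemu-server/123.conf
    ```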
  4. gurubert

    [SOLVED] Switch redundancy for cluster network?

    You can have multiple cluster network links between the Proxmox nodes. This allows you, for example, to have two 1G NICs per node and two "cheap" separate 1G switches for the cluster networks.
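    A sketch with example addresses and cluster name; corosync supports several redundant links per node:

    ```bash
    # create the cluster with two corosync links on the first node
    pvecm create mycluster --link0 192.168.10.1 --link1 192.168.20.1
    # join the second node with matching links
    pvecm add 192.168.10.1 --link0 192.168.10.2 --link1 192.168.20.2
    ```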
  5. gurubert

    Unexpected Ceph Outage - What did I do wrong?

    This is not expected. Is there any IO going on after you physically removed the drive? The OSD process will only die once it fails to access the drive, which it only attempts when there is IO. As for the complete failure, there is far too little information to debug this. How many MONs did you deploy?
  6. gurubert

    Access Proxmox managed Ceph Pool from standalone node

    Ceph is either IPv6 or IPv4, but not dual stack AFAIK. Does your standalone node have a layer 2 connection in fc00::/64? If not, you need to route the traffic, which is OK for a Ceph client.
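    A minimal sketch of such a route on the standalone node (gateway address and interface are assumptions):

    ```bash
    # route traffic for the Ceph public network via a local IPv6 gateway
    ip -6 route add fc00::/64 via fd00:10::1 dev eth0
    ```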
  7. gurubert

    Backing up bare metal machine in network

    I don't think that your first method, "running the backup from the PBS machine/VM connecting through the network", is even implemented in PBS. To back up physical machines you need to run the PBS client on them. https://pbs.proxmox.com/docs/backup-client.html
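    A hedged sketch of such a client run, assuming an example user, host and datastore:

    ```bash
    # back up the root filesystem of the physical machine to PBS
    proxmox-backup-client backup root.pxar:/ \
        --repository backup@pbs@pbs.example.com:datastore1
    ```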
  8. gurubert

    1 ceph or 2 ceph clusters for fast and slow pools?

    I do not think that this will be necessary, as the IOPS provided by the NVMe drives will make those OSDs grab CPU and memory resources, while HDD OSD processes usually just sit and wait for IO from the HDDs.
  9. gurubert

    1 ceph or 2 ceph clusters for fast and slow pools?

    Don't run Ceph in VMs. Just use the hardware in one cluster. As the clients exchange data directly with the OSDs, there is no bottleneck where both fast and slow drives would clash (except for the NICs, but these would also be shared in your scenario).
  10. gurubert

    Proxmox Ceph multiple roots that has same host item but different weights

    The weight of the buckets is always the sum of their contained items. Why would you want to change that? All pools always use all OSDs (except when you use device classes). This makes Ceph very flexible with regard to disk usage. If you want the second pool to use its "own" OSDs you need to assign a...
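    As a sketch of the device-class approach, with example rule and pool names:

    ```bash
    # create a replicated rule that only selects OSDs with device class "ssd"
    ceph osd crush rule create-replicated fast-rule default host ssd
    # point the second pool at that rule
    ceph osd pool set fastpool crush_rule fast-rule
    ```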
  11. gurubert

    Proxmox Ceph multiple roots that has same host item but different weights

    Just put two buckets of type room (or datacenter if you like) under the root and the hosts under the room buckets. The second rule would then take the room bucket for site A as starting point. No need for two root buckets.
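    A sketch with example bucket and rule names:

    ```bash
    # create two room buckets under the default root
    ceph osd crush add-bucket room-a room
    ceph osd crush add-bucket room-b room
    ceph osd crush move room-a root=default
    ceph osd crush move room-b root=default
    # move a host into its room
    ceph osd crush move host1 room=room-a
    # second rule starting at the site A room bucket
    ceph osd crush rule create-replicated site-a-rule room-a host
    ```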
  12. gurubert

    Creating storage accessible over SMB

    Mount the TrueNAS export as NFS or SMB share in the CCTV VM.
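    A minimal sketch inside the CCTV VM, assuming an example TrueNAS host and export path:

    ```bash
    mkdir -p /mnt/cctv
    mount -t nfs truenas.example.com:/mnt/tank/cctv /mnt/cctv
    ```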
  13. gurubert

    [SOLVED] NFS server in LXC

    If you have to install the nfs-kernel-server on the host anyway, why even bother adding a container?
  14. gurubert

    Advice on New Storage Type

    You have added a raw image as a virtual disk to the VM. A raw image does not support snapshots or thin provisioning. Add a qcow2 image in the directory storage to the VM instead and it will be thin provisioned and support snapshots. https://pve.proxmox.com/wiki/Storage
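    As a sketch, assuming VM ID 100, a directory storage named "local" and a 32G disk:

    ```bash
    # allocate a new thin-provisioned qcow2 disk on the directory storage
    qm set 100 --scsi1 local:32,format=qcow2
    ```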
  15. gurubert

    Ceph Reef for Windows x64 Install Leads to Automatic Repair Mode

    Have you read https://docs.ceph.com/en/latest/install/windows-install/ ? I have no experience with Ceph on Windows, but it looks like it is limited to the server editions. At least it needs Dokany if you want to use CephFS.
  16. gurubert

    Renaming storage on a node in the cluster

    Stop all VMs and CTs on pve03. Edit /etc/pve/storage.cfg and rename the storage. Edit all VM and CT config files below /etc/pve/nodes and change the storage name. Start the VMs and CTs on pve03.
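    A rough sketch of those edits, assuming a rename from "old-store" to "new-store":

    ```bash
    # rename the storage definition
    sed -i 's/: old-store$/: new-store/' /etc/pve/storage.cfg
    # update all volume references in the VM and CT configs on pve03
    sed -i 's/old-store:/new-store:/g' /etc/pve/nodes/pve03/qemu-server/*.conf
    sed -i 's/old-store:/new-store:/g' /etc/pve/nodes/pve03/lxc/*.conf
    ```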
  17. gurubert

    LXC Backup fails after Ceph Public Network Change

    Is this CT still locked? Is there a small lock icon visible? Maybe a previous backup was interrupted and could not unlock the CT.
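    If so, a stale lock can be cleared manually (CT ID 101 is an example):

    ```bash
    pct unlock 101
    ```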
  18. gurubert

    LXC Backup fails after Ceph Public Network Change

    Have you moved the MONs to new IP addresses? Is the Ceph cluster working? What does `ceph -s` tell you? Have you changed the MON IPs in the `ceph.conf` of the PBS host?
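    A quick sketch of that last check (addresses are examples):

    ```bash
    # on the PBS host, verify that ceph.conf lists the MONs' new IPs
    grep mon_host /etc/ceph/ceph.conf
    # the line should show the new addresses, e.g.
    #   mon_host = 10.0.0.11 10.0.0.12 10.0.0.13
    ```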
  19. gurubert

    Proper way to map ESXi network configs to Proxmox

    You can create an SDN with a VLAN zone where you can name each VLAN. Within the VM's configuration the named VLAN can then be selected for the vNIC. With the permission system of Proxmox you can control who is able to use which VLANs for their VMs.
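    A hedged sketch via the API CLI, with example zone, bridge, VNet name and VLAN tag:

    ```bash
    # create a VLAN zone on bridge vmbr0 and a named VLAN (VNet) with tag 100
    pvesh create /cluster/sdn/zones --type vlan --zone office --bridge vmbr0
    pvesh create /cluster/sdn/vnets --vnet vlan100 --zone office --tag 100
    # apply the pending SDN configuration
    pvesh set /cluster/sdn
    ```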