Search results

  1. gurubert

    Proxmox Ceph multiple roots that has same host item but different weights

The weight of the buckets is always the sum of their contained items. Why would you want to change that? All pools always use all OSDs (except when you use device classes). This makes Ceph very flexible with regard to disk usage. If you want the second pool to use its "own" OSDs you need to assign a...
  2. gurubert

    Proxmox Ceph multiple roots that has same host item but different weights

    Just put two buckets of type room (or datacenter if you like) under the root and the hosts under the room buckets. The second rule would then take the room bucket for site A as starting point. No need for two root buckets.
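The layout described above can be sketched with the ceph CLI. This is a hedged example: the bucket names (room-a, room-b) and host names are placeholders, not values from the thread.

```shell
# Create two room buckets under the existing root and move hosts into them.
ceph osd crush add-bucket room-a room
ceph osd crush add-bucket room-b room
ceph osd crush move room-a root=default
ceph osd crush move room-b root=default
ceph osd crush move host1 room=room-a
ceph osd crush move host2 room=room-b
# Second rule takes the room bucket for site A as its starting point:
ceph osd crush rule create-replicated site-a-rule room-a host
```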
  3. gurubert

    Creating storage accessible over SMB

    Mount the TrueNAS export as NFS or SMB share in the CCTV VM.
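Inside the CCTV VM the mount could look like this sketch; server name, share path and mountpoint are placeholders.

```shell
# NFS variant (placeholder export path):
mount -t nfs truenas.example.com:/mnt/tank/cctv /mnt/cctv
# SMB variant (needs the cifs-utils package):
mount -t cifs //truenas.example.com/cctv /mnt/cctv -o username=cctv
```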
  4. gurubert

    [SOLVED] NFS server in LXC

    If you have to install the nfs-kernel-server on the host why even bother to add a container?
  5. gurubert

    Advice on New Storage Type

    You have added a RAW image as virtual disk to the VM. A raw image does not support snapshots or thin provisioning. Add a qcow2 image in the directory storage to the VM and it will be thin provisioned and support snapshots. https://pve.proxmox.com/wiki/Storage
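Adding a qcow2 disk on a directory storage can be done with qm. A minimal sketch, assuming VMID 100 and a directory storage named "local"; adjust both to your setup.

```shell
# Creates a new 32 GB disk in qcow2 format (thin-provisioned, snapshot-capable):
qm set 100 --scsi1 local:32,format=qcow2
```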
  6. gurubert

    Ceph Reef for Windows x64 Install Leads to Automatic Repair Mode

    Have you read https://docs.ceph.com/en/latest/install/windows-install/ ? I have no experience with Ceph on Windows but it looks like it is limited to the server editions. At least it needs dokany if you want to use CephFS.
  7. gurubert

    Renaming storage on a node in the cluster

    Stop all VMs and CTs on pve03. Edit /etc/pve/storage.cfg and rename the storage. Edit all VM and CT config files below /etc/pve/nodes and change the storage name. Start the VMs and CTs on pve03.
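The rename steps above could be scripted roughly as follows; "oldname" and "newname" are placeholders for the actual storage IDs.

```shell
# 1. Stop all guests on pve03 first.
# 2. Rename the storage definition:
sed -i 's/: oldname$/: newname/' /etc/pve/storage.cfg
# 3. Update every guest config on the node that references the old name:
grep -rl 'oldname:' /etc/pve/nodes/pve03/qemu-server /etc/pve/nodes/pve03/lxc \
  | xargs sed -i 's/oldname:/newname:/g'
# 4. Start the guests on pve03 again.
```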
  8. gurubert

    LXC Backup fails after Ceph Public Network Change

    Is this CT still locked? Is there a small lock icon visible? Maybe a previous backup was interrupted and could not unlock the CT.
  9. gurubert

    LXC Backup fails after Ceph Public Network Change

    Have you moved the MONs to new IP addresses? Is the Ceph cluster working? What does ceph -s tell you? Have you changed the MON IPs in the ceph.conf of the PBS host?
  10. gurubert

    Proper way to map ESXi network configs to Proxmox

    You can create an SDN with a VLAN zone where you can name each VLAN. Within the VM's configuration the named VLAN can then be selected for the vNIC. With the permission system of Proxmox you can control who is able to use which VLANs for their VMs.
  11. gurubert

    Deploying multiple proxmox host easily?

    I would also assume that automating a Debian installation and then adding the Proxmox repos and packages would be the easiest way. Have you looked at https://fai-project.org/ ?
  12. gurubert

    Fragen zu einem Ceph Cluster Setup

2x 25G is better than 2x 40G because of the lower latencies. NVMe drives do not need an HBA controller, since they attach directly to the PCIe bus.
  13. gurubert

    Two Ceph pool with seperated hard disks in each node

    The command line tool "ceph" is available to create crush rules and set device classes.
  14. gurubert

    Two Ceph pool with seperated hard disks in each node

Assign a different device class to the "B" disks and create crush rules that utilize these device classes. Assign the crush rules to the pools and they will only use the OSDs from their device class.
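The device-class approach can be sketched with the ceph CLI; the class name "classB", the OSD IDs and the pool name are placeholders.

```shell
# Tag the "B" disks with their own device class
# (an existing class must be removed with "ceph osd crush rm-device-class" first):
ceph osd crush set-device-class classB osd.4 osd.5 osd.6
# Create a replicated rule restricted to that class:
ceph osd crush rule create-replicated rule-b default host classB
# Point the pool at the new rule; it will then only use classB OSDs:
ceph osd pool set poolB crush_rule rule-b
```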
  15. gurubert

    [SOLVED] How to enable tagged vlan1 for VM by default?

    Use another VLAN ID as PVID for that bridge. We use 4063 for that purpose:

        auto vmbr0
        iface vmbr0 inet static
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 1-3 5-99 101-4094
            bridge-pvid 4063
  16. gurubert

    Ceph PG error

IMHO 80GB for the WAL alone is way too much. 80GB of extra SSD space per HDD OSD should be sufficient to hold both DB and WAL. OSDs automatically place the WAL together with the DB if you only specify a db-device on creation.
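Creating an HDD OSD with a single SSD db-device (which then holds both DB and WAL) could look like this sketch; the device paths are placeholders.

```shell
# Proxmox tooling: only --db_dev is given, so WAL lands on the same device.
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
```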
  17. gurubert

    Ceph PG error

    The MGR needs that pool and will create it, I forgot. :)
  18. gurubert

    Ceph PG error

The .mgr pool is the one with the missing PG; you need to remove it and recreate it with the same name. The MGR stores its information in that pool.
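Removing and recreating the pool could be done like this; a hedged sketch, since pool deletion must first be allowed on the MONs.

```shell
# Pool deletion is disabled by default:
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm .mgr .mgr --yes-i-really-really-mean-it
# Recreate it with the same name (the MGR would also recreate it itself):
ceph osd pool create .mgr 1
```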
  19. gurubert

    Ceph PG error

    Yes, this should not affect the other pool.