Search results

  1. gurubert

    Creating storage accessible over SMB

    Mount the TrueNAS export as NFS or SMB share in the CCTV VM.
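    For illustration, a minimal sketch of such a mount inside the VM, assuming a Linux guest and hypothetical host/share names:

        # NFS (package nfs-common on Debian/Ubuntu)
        mkdir -p /mnt/recordings
        mount -t nfs truenas.lan:/mnt/tank/cctv /mnt/recordings

        # or SMB/CIFS (package cifs-utils)
        # mount -t cifs //truenas.lan/cctv /mnt/recordings -o username=cctv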
  2. gurubert

    [SOLVED] NFS server in LXC

    If you have to install the nfs-kernel-server on the host, why even bother to add a container?
  3. gurubert

    Advice on New Storage Type

    You have added a raw image as a virtual disk to the VM. A raw image does not support snapshots or thin provisioning. Add a qcow2 image on the directory storage to the VM instead, and it will be thin-provisioned and support snapshots. https://pve.proxmox.com/wiki/Storage
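    A rough sketch of how that could look on the CLI; the VM ID (100), the disk slots and the storage name "local" are assumptions:

        # convert the existing raw disk to qcow2 on the directory storage
        qm disk move 100 scsi0 local --format qcow2
        # or attach a brand-new 32G qcow2 disk instead
        # qm set 100 --scsi1 local:32,format=qcow2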
  4. gurubert

    Ceph Reef for Windows x64 Install Leads to Automatic Repair Mode

    Have you read https://docs.ceph.com/en/latest/install/windows-install/ ? I have no experience with Ceph on Windows but it looks like it is limited to the server editions. At least it needs dokany if you want to use CephFS.
  5. gurubert

    Renaming storage on a node in the cluster

    Stop all VMs and CTs on pve03. Edit /etc/pve/storage.cfg and rename the storage. Edit all VM and CT config files below /etc/pve/nodes and change the storage name. Start the VMs and CTs on pve03.
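    A sketch of those edits, with hypothetical names "old-store" and "new-store" (back up /etc/pve/storage.cfg first and make sure all guests on pve03 are stopped):

        sed -i 's/: old-store$/: new-store/' /etc/pve/storage.cfg
        # update all references like "old-store:vm-100-disk-0" in the guest configs
        sed -i 's/\bold-store:/new-store:/g' /etc/pve/nodes/pve03/qemu-server/*.conf \
            /etc/pve/nodes/pve03/lxc/*.conf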
  6. gurubert

    LXC Backup fails after Ceph Public Network Change

    Is this CT still locked? Is there a small lock icon visible? Maybe a previous backup was interrupted and could not unlock the CT.
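    A quick way to check and clear such a stale lock from the CLI; the CT ID 105 is an assumption:

        pct config 105 | grep ^lock     # shows e.g. "lock: backup" if a lock is still set
        pct unlock 105                  # removes the stale lock left by the interrupted backup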
  7. gurubert

    LXC Backup fails after Ceph Public Network Change

    Have you moved the MONs to new IP addresses? Is the Ceph cluster working? What does ceph -s tell you? Have you changed the MON IPs in the ceph.conf of the PBS host?
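    A couple of quick checks, assuming the default config path on the PBS host:

        ceph -s                              # on a Proxmox node: cluster health and MON quorum
        grep mon_host /etc/ceph/ceph.conf    # on the PBS host: must list the new MON IPs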
  8. gurubert

    Proper way to map ESXi network configs to Proxmox

    You can create an SDN with a VLAN zone where you can name each VLAN. Within the VM's configuration, the named VLAN can then be selected for the vNIC. With Proxmox's permission system you can control who is able to use which VLANs for their VMs.
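    As a rough sketch of how such a zone and a named VLAN could be created from the CLI (the zone and VNet names, bridge and tag are assumptions):

        pvesh create /cluster/sdn/zones --zone vlans --type vlan --bridge vmbr0
        pvesh create /cluster/sdn/vnets --vnet office --zone vlans --tag 20
        pvesh set /cluster/sdn              # apply the pending SDN configuration
        # the named VNet can then be used like a bridge for a vNIC:
        # qm set 100 --net0 virtio,bridge=office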
  9. gurubert

    Deploying multiple proxmox host easily?

    I would also assume that automating a Debian installation and then adding the Proxmox repos and packages would be the easiest way. Have you looked at https://fai-project.org/ ?
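    For reference, the manual steps one would automate after the Debian install (Debian 12 "bookworm" assumed, as in the Proxmox wiki):

        echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
            > /etc/apt/sources.list.d/pve-install-repo.list
        wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
            -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
        apt update && apt full-upgrade -y
        apt install -y proxmox-ve postfix open-iscsi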
  10. gurubert

    Fragen zu einem Ceph Cluster Setup

    2x 25G is better than 2x 40G because of the lower latencies. For NVMe no HBA controller is needed, since those drives are attached directly to the PCI bus.
  11. gurubert

    Two Ceph pool with seperated hard disks in each node

    The command line tool "ceph" is available to create crush rules and set device classes.
  12. gurubert

    Two Ceph pool with seperated hard disks in each node

    Assign a different device class to the "B" disks and create crush rules that utilize these device classes. Assign the crush rules to the pools and they will only use the OSDs from their device class.
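    A minimal sketch of those commands; the OSD IDs, the class name "classB" and the pool name are assumptions:

        ceph osd crush rm-device-class osd.4 osd.5           # clear the auto-assigned class first
        ceph osd crush set-device-class classB osd.4 osd.5   # tag the "B" disks
        ceph osd crush rule create-replicated rule-b default host classB
        ceph osd pool set poolB crush_rule rule-b            # the pool now only uses classB OSDs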
  13. gurubert

    [SOLVED] How to enable tagged vlan1 for VM by default?

    Use another VLAN ID as PVID for that bridge. We use 4063 for that purpose:

        auto vmbr0
        iface vmbr0 inet static
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 1-3 5-99 101-4094
            bridge-pvid 4063
  14. gurubert

    Ceph PG error

    IMHO 80 GB for the WAL is way too much. One extra 80 GB of SSD space per HDD OSD should be sufficient; it will then contain both DB and WAL. OSDs automatically put WAL and DB together if you only specify one DB device on creation.
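    A sketch of such an OSD creation; the device paths are assumptions:

        # HDD OSD with an 80 GiB DB (and implicitly the WAL) on a shared SSD/NVMe
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 80
        # with only a DB device given, BlueStore places the WAL on the same device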
  15. gurubert

    Ceph PG error

    The MGR needs that pool and will create it, I forgot. :)
  16. gurubert

    Ceph PG error

    The .mgr pool is the one with the missing PG; you need to remove it and recreate it with the same name. The MGR stores information in that pool.
  17. gurubert

    Ceph PG error

    Yes, this should not affect the other pool.
  18. gurubert

    Ceph PG error

    You seem to have removed the OSDs too fast, and the pool was not able to migrate its PG to other OSDs; now it cannot find the PG any more. You can try to remove the pool and recreate it. It is called ".mgr", with a dot in front.
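    A cautious sketch of that removal (pool deletion has to be allowed explicitly and should be disabled again afterwards):

        ceph config set mon mon_allow_pool_delete true
        ceph osd pool rm .mgr .mgr --yes-i-really-really-mean-it
        ceph config set mon mon_allow_pool_delete false
        # as noted above, the active MGR will recreate the .mgr pool on its own;
        # otherwise create it manually: ceph osd pool create .mgr 1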
  19. gurubert

    Ceph PG error

    What happened to the pool with ID 1 (usually the pool called ".mgr")? Its only placement group is in state "unknown", which is not good. Please try to restart OSDs 2, 3 and 7 with "systemctl restart ceph-osd@2.service" etc. on their nodes. If that does not fix the issue please post the...
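    To check whether the PG recovers after the restarts (each restart has to run on the node that hosts the OSD):

        systemctl restart ceph-osd@2.service     # likewise for ceph-osd@3 and ceph-osd@7
        ceph pg ls-by-pool .mgr                  # the single PG of pool 1 should leave "unknown"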