Recent content by l.ansaloni

  1. microk8s connect-external-ceph error

    It worked! Thanks so much for the advice! I deleted the client.healthchecker user from ceph with the command ceph auth rm client.healthchecker and the command terminated successfully (I just added the "-E" option for another previously reported problem): $ sudo -E microk8s...
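Condensed as a sketch, the fix described in this post amounts to removing the stale Ceph user and re-running the helper (the pool name and the conf/keyring file names are the ones mentioned elsewhere in this thread; adjust them for your own cluster):

```shell
# Remove the leftover Ceph user from a previous failed run.
sudo ceph auth rm client.healthchecker

# Re-run the connect helper; -E preserves the environment under sudo
# (added here for a separately reported issue, as noted in the post).
sudo -E microk8s connect-external-ceph \
    --ceph-conf ceph.conf \
    --keyring ceph.client.admin.keyring \
    --rbd-pool microk8s-rbd
```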
  2. microk8s connect-external-ceph error

    I'm sorry, but I'm no expert on microk8s and I can't find any other debug option: $ /snap/microk8s/6089/usr/bin/python3 /var/snap/microk8s/common/plugins/.rook-create-external-cluster-resources.py --format=bash --rbd-data-pool-name=microk8s-rbd --ceph-conf=ceph.conf...
  3. microk8s connect-external-ceph error

    I deleted the microk8s-rbd pool from Proxmox and from microk8s I deleted the rook-ceph-external namespace but it still gives an error: $ sudo microk8s connect-external-ceph --ceph-conf ceph.conf --keyring ceph.client.admin.keyring --rbd-pool microk8s-rbd Attempting to connect to Ceph cluster...
  4. microk8s connect-external-ceph error

    Hi all, I have a Proxmox cluster with 3 nodes and Ceph storage. I installed version 1.28/stable of microk8s on 3 VMs with the command: sudo snap install microk8s --classic --channel=1.28/stable. I would like to use the Proxmox Ceph cluster as shared storage for the microk8s cluster, and so I enabled...
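The setup described in this post can be sketched as two steps: install microk8s on each VM, then point it at the external Proxmox Ceph cluster (the connect command and pool name are taken from a later post in the same thread; the conf/keyring paths are relative to the working directory):

```shell
# On each of the three VMs: install microk8s 1.28 (command from the post).
sudo snap install microk8s --classic --channel=1.28/stable

# From one node: connect to the external Proxmox Ceph cluster.
sudo microk8s connect-external-ceph \
    --ceph-conf ceph.conf \
    --keyring ceph.client.admin.keyring \
    --rbd-pool microk8s-rbd
```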
  5. VirtIO = task xxx blocked for more than 120 seconds.

    I configured the VMs like this: Device: SCSI; SCSI Controller: "VirtIO SCSI Single"; Cache: Write back; Async IO: threads; Discard and SSD emulation flagged; IO thread: unflagged. I stopped/started the VMs and ran the fstrim -a command... let's see if the system is stable.
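The same disk settings can be applied from the Proxmox shell with qm; a minimal sketch, assuming VMID 101 and a disk volume named local-lvm:vm-101-disk-0 (both hypothetical, not from the post):

```shell
# Use the single-queue VirtIO SCSI controller for the VM.
qm set 101 --scsihw virtio-scsi-single

# Re-apply the scsi0 disk with the options described above:
# writeback cache, threaded async IO, discard + SSD emulation on,
# iothread off. Volume name is a placeholder for your actual disk.
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback,aio=threads,discard=on,ssd=1,iothread=0

# Inside the guest, after a stop/start, trim unused blocks:
fstrim -a
```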
  6. VirtIO = task xxx blocked for more than 120 seconds.

    I disabled iothread on all disks: I'll update you in about ten days if the situation becomes stable.
  7. VirtIO = task xxx blocked for more than 120 seconds.

    Hi, I thought I had solved it with your configuration advice but after 8 days a VM crashed again. These are the messages I find in the system logs: Jan 30 06:00:00 docker-cluster-101 qemu-ga: info: guest-ping called ... ... Jan 30 06:11:45 docker-cluster-101 kernel: [649222.434886] INFO: task...
  8. VirtIO = task xxx blocked for more than 120 seconds.

    Hi, I also have a problem: 3 VMs with 64GB of RAM, running Ubuntu 22.04 LTS with kernel 5.15.0-91-generic. Proxmox VE is installed on 3 nodes with 512GB of RAM, version pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.5.11-7-pve). You write that the settings of...
  9. cephfs clients failing to respond to capability release

    Good morning, an update on my problem, which I managed to solve. I isolated the PVE cluster traffic on one VLAN and the Docker cluster traffic on a second VLAN. For a few months now the CephFS storage has had no problems and performance has been excellent.
  10. One backup at a time in the cluster

    Hi, I use these versions: Proxmox Backup Server 2.2-6; Proxmox Virtual Environment 7.2-11/b76d3178 (running kernel: 5.15.53-1-pve). I have a 3-node Proxmox cluster with Ceph storage, and I back up the VMs with PBS using a network share from a NAS with 10Gbps connectivity as a datastore. If I...
  11. [SOLVED] DataStore content is empty after moving it to a new server

    I am using a QNAP NAS and needed to replace the disks, so I went through this procedure: copied the datastore to a USB disk with an NTFS filesystem (I think the original filesystem of the QNAP NAS is ext4), replaced the NAS disks, and copied the whole datastore from the USB disk to the new NAS share...
  12. [SOLVED] DataStore content is empty after moving it to a new server

    I also did not see the old backups after replacing the disks and copying the old data. The problem was due to incorrect encoding of filenames that contained unknown characters: after renaming all the folders like /repository_path/vm/XX/folder_bad_character, the old backups appeared.
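A small sketch for locating mis-encoded names like these, assuming the damage shows up as bytes outside printable ASCII (a common symptom after a copy through an NTFS disk; the function name is mine, not from the post):

```shell
# find_bad_names DIR
# Print entries under DIR whose names contain any byte outside the
# printable ASCII range; with LC_ALL=C the glob matches raw bytes.
find_bad_names() {
    LC_ALL=C find "$1" -name '*[! -~]*' -print
}

# Hypothetical usage against the datastore's VM groups:
# find_bad_names /repository_path/vm
```

Each path it prints is a candidate for renaming back to a clean ASCII name, as described in the post.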
  13. Garbage Collector - TASK ERROR: unexpected error on datastore traversal: Not a directory (os error 20)

    I solved the problem. In the same folder as the backup there was another subfolder that had nothing to do with the backup; once I removed that subfolder, the garbage collector ran normally.
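As a sketch of how one might spot such strays: PBS keeps snapshots under vm/&lt;id&gt;/&lt;timestamp&gt;/, and those snapshot directories normally hold only index and blob files, so any directory nested deeper is suspect (the function name is mine, not from the post):

```shell
# list_stray_dirs DATASTORE
# Print directories nested below the vm/<id>/<timestamp>/ level of a
# PBS datastore; such foreign subfolders can break GC traversal.
list_stray_dirs() {
    find "$1/vm" -mindepth 3 -type d -print
}

# Hypothetical usage:
# list_stray_dirs /repository_path
```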
  14. cephfs clients failing to respond to capability release

    Hi there, I have 3 Proxmox nodes (Supermicro SYS-120C-TN10R) connected via Mellanox 100GbE ConnectX-6 Dx cards in cross-connect mode using an MCP1600-C00AE30N DAC cable, no switch. I followed the guide Full Mesh Network for Ceph Server and in particular used Open vSwitch to configure the network...
