It worked! Thanks so much for the advice!
I deleted the client.healthchecker user from Ceph with the command ceph auth rm client.healthchecker, and it completed successfully (I just added the "-E" option because of another problem I reported previously):
$ sudo -E microk8s...
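To double-check on the Ceph side that the user was really gone, a quick look at the auth list should show nothing for it afterwards (just a verification step, not part of the procedure itself):
$ sudo ceph auth ls | grep healthchecker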
I'm sorry, but I'm not very experienced with microk8s and I can't find any other debug option.
$ /snap/microk8s/6089/usr/bin/python3 /var/snap/microk8s/common/plugins/.rook-create-external-cluster-resources.py --format=bash --rbd-data-pool-name=microk8s-rbd --ceph-conf=ceph.conf...
I deleted the microk8s-rbd pool from Proxmox and the rook-ceph-external namespace from microk8s, but it still gives an error:
$ sudo microk8s connect-external-ceph --ceph-conf ceph.conf --keyring ceph.client.admin.keyring --rbd-pool microk8s-rbd
Attempting to connect to Ceph cluster...
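For completeness, the cleanup before retrying was roughly equivalent to these two commands (the first on a Proxmox node, the second on a microk8s node; exact invocations from memory):
$ sudo pveceph pool destroy microk8s-rbd
$ sudo microk8s kubectl delete namespace rook-ceph-external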
Hi all,
I have a Proxmox cluster with 3 nodes and Ceph storage. I installed microk8s 1.28/stable on 3 VMs with the command:
sudo snap install microk8s --classic --channel=1.28/stable
I would like to use the Proxmox Ceph cluster as shared storage for the microk8s cluster, and so I enabled...
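For anyone following along, the sequence is roughly this (sketching from memory; the addon name is my best guess at what provides connect-external-ceph in 1.28):
$ sudo microk8s enable rook-ceph
$ sudo microk8s connect-external-ceph --ceph-conf ceph.conf --keyring ceph.client.admin.keyring --rbd-pool microk8s-rbd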
I configured the VMs like this:
Device: SCSI
SCSI Controller: "VirtIO SCSI Single"
Cache: Write back
Async IO: threads
Discard and SSD emulation flagged
IO thread: unflagged
I stopped/started the VMs and ran the fstrim -a command... let's see if the system is stable.
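For comparison, with those settings the disk line in /etc/pve/qemu-server/<vmid>.conf should look roughly like this (storage and disk names here are just placeholders):
scsihw: virtio-scsi-single
scsi0: ceph-pool:vm-101-disk-0,aio=threads,cache=writeback,discard=on,ssd=1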
Hi,
I thought I had solved it with your configuration advice, but after 8 days a VM crashed again.
These are the messages I find in the system logs:
Jan 30 06:00:00 docker-cluster-101 qemu-ga: info: guest-ping called
...
...
Jan 30 06:11:45 docker-cluster-101 kernel: [649222.434886] INFO: task...
Hi,
I also have a problem with 3 VMs with 64GB of RAM, running Ubuntu 22.04 LTS and kernel 5.15.0-91-generic.
Proxmox VE is installed on 3 nodes with 512GB of RAM, version pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.5.11-7-pve).
You write that the settings of...
Good morning,
an update on my problem, which I managed to solve.
I isolated the pve cluster traffic with one VLAN and the docker cluster traffic with a second VLAN.
For a few months now the CephFS storage has had no more problems and performance has been excellent.
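In case it is useful to someone, the separation is just two VLAN interfaces per node in /etc/network/interfaces, roughly like this (assuming a VLAN-aware bridge vmbr0; VLAN IDs and addresses are only examples):
# VLAN 10: pve cluster / Ceph traffic
auto vmbr0.10
iface vmbr0.10 inet static
    address 10.10.10.11/24
# VLAN 20: docker cluster traffic
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.20.20.11/24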
Hi,
I use these versions:
Proxmox Backup Server 2.2-6
Proxmox Virtual Environment 7.2-11/b76d3178 (running kernel: 5.15.53-1-pve)
I have a 3-node Proxmox cluster with Ceph storage, and I back up the VMs with PBS using a network share from a NAS with 10Gbps connectivity as the datastore.
If I...
I am using a QNAP NAS and I needed to replace the disks, so I went through this procedure:
copied the datastore to a USB disk with an NTFS filesystem (I think the original filesystem of the QNAP NAS is ext4)
replaced the NAS disks and copied the whole datastore from the USB disk to the new NAS share...
I also did not see the old backups after replacing the disks and copying the old data. The problem turned out to be incorrect encoding of the filenames, which contained unknown characters:
after renaming all the folders like /repository_path/vm/XX/folder_bad_character, the old backups appeared.
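For anyone hitting the same thing, the broken names can be found and fixed in bulk, roughly like this (the source encoding is only a guess for my case; convmv does a dry run until you add --notest):
$ find /repository_path/vm -maxdepth 2 | grep -P '[^\x00-\x7F]'
$ convmv -f iso-8859-1 -t utf-8 -r /repository_path/vm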
I solved the problem.
In the same folder as the backup there was another subfolder that had nothing to do with the backup; after removing that subfolder, the garbage collector started running regularly.
Hi there
I have 3 Proxmox nodes Supermicro SYS-120C-TN10R connected via Mellanox 100GbE ConnectX-6 Dx cards in cross-connect mode using MCP1600-C00AE30N DAC Cable, no switch.
I followed the guide: Full Mesh Network for Ceph Server and in particular I used Open vSwitch to configure the network...
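For reference, the relevant part of /etc/network/interfaces on each node ended up roughly like this, simplified from the wiki guide (interface names and the address are placeholders):
auto enp1s0f0
iface enp1s0f0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options other_config:rstp-enable=true
auto enp1s0f1
iface enp1s0f1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options other_config:rstp-enable=true
auto vmbr1
iface vmbr1 inet static
    address 10.15.15.1/24
    ovs_type OVSBridge
    ovs_ports enp1s0f0 enp1s0f1
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true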
I have the same problem: the first VM's backup fails due to a timeout, while the subsequent VMs complete their backups successfully.
Is there any way to retry the backup N times?
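What I have in mind is basically a wrapper around vzdump that retries a few times before giving up, something like this (VM ID, storage name and retry count are just examples; it would obviously be nicer to have this built in):
#!/bin/bash
# retry the backup of a single VM up to 3 times
for i in 1 2 3; do
    vzdump 505 --storage backup-cluster --mode snapshot && break
    echo "backup attempt $i failed, retrying in 60s..."
    sleep 60
done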
On the same storage I have scheduled the backup of 16 VMs, which runs regularly; this, for example, is the log of one backup:
2022-05-15T21:22:40+02:00: starting new backup on datastore 'backup-cluster': "vm/505/2022-05-15T19:22:39Z"
2022-05-15T21:22:40+02:00: download 'index.json.blob' from previous...
The datastore is a shared folder from a QNAP NAS mounted as cifs:
//NAS-IP/backup-cluster /backup-cluster cifs auto,username=backup-cluster,password=XXXXXXX,vers=1.0,uid=34,noforceuid,gid=34,noforcegid 0 0
In the Task Viewer Output I only see these log lines:
2022-05-13T00:00:00+02:00: starting...
In my opinion it would also be very useful to have "successful" or "failed" at the beginning of the email subject.
More generally, a very convenient option would be to allow customizing the subject and content of the email.
Thanks