Still running on the setup described in post #27.
The bugs I found back then should be fixed by now - but I never tried again...
If you want to try:
- Use an LXC container so you do not need to run keepalived for HA - the restart of those containers is fast enough (see the sketch after this list)
- Use NixOS - as it is easy to...
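A sketch of the HA part, in case it helps (ct:101 is a placeholder container ID and nfs-nodes a made-up group name; ha-manager add/groupadd/set are the standard PVE CLI commands):

# Put the NFS container under HA control so a node failure just restarts it on another node
ha-manager add ct:101 --state started
# Optionally restrict it to the nodes that can reach the backing CephFS
ha-manager groupadd nfs-nodes --nodes proxmox07,proxmox08,proxmox09
ha-manager set ct:101 --group nfs-nodes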
Hi,
you could have a look at my journey for NFS - CephFS on Proxmox...
https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/
As far as I understand the bugs I encountered should be solved by now in the stable NFS Ganesha version 4 series...
I wrote a script for that to run regularly on each host node...
root@proxmox07:~# cat /usr/local/bin/pve-ha-enable.sh
#!/bin/bash
# Running VMs
VMIDS_R=$(qm list | grep running | awk '{print $1}' | tail -n +1)
# Stopped VMs
VMIDS_S=$(qm list | grep stopped | awk '{print $1}' | tail -n +1)...
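In essence the script then just feeds those IDs to ha-manager; a minimal sketch of that part, reusing the VMIDS_R/VMIDS_S variables from the snippet (ha-manager add and its --state option are standard PVE CLI, the loops and the error suppression are just my illustration):

# Register running VMs with the HA manager in state "started"
for VMID in $VMIDS_R; do
    ha-manager add vm:"$VMID" --state started 2>/dev/null
done
# Register stopped VMs in state "stopped" so HA does not start them
for VMID in $VMIDS_S; do
    ha-manager add vm:"$VMID" --state stopped 2>/dev/null
done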
We had RBD image snapshots which we deleted, but rados objects relating to those snapshots were left behind.
We moved all VM disk images to local disk storage and had to delete the RBD pool, which caused further problems. Do not delete stuff on Ceph - just add disks... ;-)
We still miss the "Maintenance Mode" we know from VMware with shared storage.
There is nothing visible like that on the Roadmap. Is something like that planned?
Example:
- Proxmox Node needs hardware maintenance
- Instead of Shutdown, click "Maintenance Mode"
- VMs are migrated away to other...
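Until something like that exists, roughly what I imagine such a button would do, as a sketch (proxmox08 as target node is just an example; qm migrate --online is the standard live migration command):

TARGET=proxmox08   # example target node
# Live-migrate every running VM away from this node before the maintenance
for VMID in $(qm list | grep running | awk '{print $1}'); do
    qm migrate "$VMID" "$TARGET" --online
done

For VMs under HA control, ha-manager migrate would be the matching command instead.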
Trying now to get rid of that broken RBD Pool:
root@proxmox07:~# rados -p ceph-proxmox-VMs ls | head -10
rbd_data.7763f760a5c7b1.00000000000013df
rbd_data.76d8c7e94f5a3a.00000000000102c0
rbd_data.7763f760a5c7b1.0000000000000105
rbd_data.f02a4916183ba2.0000000000013e45...
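If the pool really only contains such orphaned objects, destroying and re-creating it is one way out; a sketch of the commands (assuming nothing references the pool any more - mon_allow_pool_delete and pveceph pool destroy/create are the standard tools, but verify before running anything):

# Allow pool deletion on the monitors (disabled by default)
ceph config set mon mon_allow_pool_delete true
# Destroy the pool including the leftover rados objects
pveceph pool destroy ceph-proxmox-VMs
# Re-create it and add it back as a PVE storage
pveceph pool create ceph-proxmox-VMs --add_storages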
Looking at
https://forum.proxmox.com/threads/ceph-storage-usage-confusion.94673/
So here is the result of the four counts:
root@proxmox07:~# rbd ls ceph-proxmox-VMs | wc -l
0
root@proxmox07:~# rados ls -p ceph-proxmox-VMs | grep rbd_data | sort | awk -F. '{ print $2 }' |uniq -c |sort -n |wc -l...
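For reference, a sketch of how the comparison can be done end to end - the block_name_prefix from rbd info for every known image versus the rbd_data prefixes actually present in rados (the /tmp file names are just placeholders):

# Prefixes of all images the pool still knows about
for IMG in $(rbd ls ceph-proxmox-VMs); do
    rbd info ceph-proxmox-VMs/"$IMG" | grep block_name_prefix
done | awk '{print $2}' | sort > /tmp/known-prefixes
# Prefixes found in the raw rados objects
rados ls -p ceph-proxmox-VMs | grep rbd_data | awk -F. '{print "rbd_data."$2}' | sort -u > /tmp/rados-prefixes
# Anything only in the second list belongs to no existing image
comm -13 /tmp/known-prefixes /tmp/rados-prefixes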
I moved all VM disks from the Ceph RBD to a host-local ZFS mirror and after migrating it looks like this:
root@proxmox07:~# rbd disk-usage --pool ceph-proxmox-VMs
root@proxmox07:~# ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 87 TiB 19 TiB 68...
Hi,
it seems that I am running out of space on the OSDs of a five node hyperconverged Proxmox Ceph cluster:
root@proxmox07:~# rbd du --pool ceph-proxmox-VMs
NAME PROVISIONED USED
vm-100-disk-0 1 GiB 1 GiB
vm-100-disk-1...
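In case someone else is debugging the same thing: besides rbd du per image, the per-OSD view usually shows quickly whether a single nearly-full OSD is the limiting factor (plain Ceph commands, nothing Proxmox-specific):

# Per-OSD raw usage, variance and PG count - the fullest OSD limits the pool
ceph osd df tree
# Pool usage including replication overhead
ceph df detail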
Could not find a way to send a pull request to the PVE Doc git.
https://pbs.proxmox.com/docs/managing-remotes.html#bandwidth-limit
Typo: " congetsion "
Hi,
we are running an older Proxmox Ceph cluster here and I am currently looking through the disks.
So the OS disks have a wearout of two percent, but the Ceph OSDs still show 0%?!
So I looked into the Lenovo XClarity Controller:
So for the OS disks it looks the same, but the Ceph...
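For cross-checking, the values can also be read directly on the host with smartctl (the device names and the grep pattern are just examples; the attribute names differ per vendor):

# SATA/SAS SSDs: vendor-specific wear attributes
smartctl -a /dev/sda | grep -iE 'wear|percent.*used'
# NVMe devices report "Percentage Used" in the SMART health log
smartctl -a /dev/nvme0n1 | grep -i 'percentage used'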
I only found out about msecli from this ZFS benchmark thread and back then had not considered it for my benchmarks.
So yes, I was wrong - it should be 4KB NVMe block size.
And the default Ceph block size is 4MB - no idea if Proxmox changes anything for the RBDs here.
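Whether anything was changed can at least be checked per image - rbd info prints the object size, where order 22 / 4 MiB is the Ceph default (vm-100-disk-0 is just one of the images listed above):

# Prints e.g. "order 22 (4 MiB objects)" plus features and the block_name_prefix
rbd info ceph-proxmox-VMs/vm-100-disk-0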
The data for these graphs is collected by Zabbix agents into a Zabbix DB. From there I used the Zabbix plugin in Grafana. Our decision to use Zabbix is 10 years old and we moved away from Nagios. As long as we are still able to monitor everything (really everything!) in Zabbix we do not even...
I performance-tested from 1 to 4 OSDs per NVMe. It really depends on the system configuration - to drive more OSDs you need more CPU threads.
See this thread and the posts around there.
With my experience so far, I would now just create one OSD per device. As Ceph uses a 4M "block size" I would...
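For anyone who wants to reproduce the multi-OSD tests: splitting a device is done via ceph-volume (the device path is a placeholder):

# Split one NVMe into two LVM-backed OSDs
ceph-volume lvm batch --osds-per-device 2 /dev/nvme2n1
# ...or the plain Proxmox way with one OSD per device:
# pveceph osd create /dev/nvme2n1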
So here is the status... upgrade is in planning
root@proxmox07:~# qm status 167 --verbose
blockstat:
scsi0:
account_failed: 1
account_invalid: 1
failed_flush_operations: 0
failed_rd_operations: 0...
Ok, took some time to find out...
proxmox-boot-tool does not prepare the systemd-boot configuration if /sys/firmware/efi does not exist. So to prepare the sda2/sdb2 filesystem for systemd-boot before booting via UEFI, I had to remove those checks from /usr/sbin/proxmox-boot-tool.
So I was able to change the disk layout online by doing this:
zpool status
# !!! Be careful with device names and partition numbers!!!!
zpool detach rpool sdb2
cfdisk /dev/sdb # Keep only partition 1 (BIOS), create partition 2 with EFI and partition 3 with ZFS
fdisk -l /dev/sdb
# Should look...
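The remaining steps would then roughly be the following (a sketch; partition numbers follow the layout created above: 2 = EFI, 3 = ZFS - double-check with zpool status and fdisk -l before attaching anything):

# Put the new ESP under control of proxmox-boot-tool
# (this is where the EFI checks mentioned above got in the way)
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
# Attach the new ZFS partition back to the mirror and wait for the resilver
zpool attach rpool sda2 sdb3
zpool status rpool
# Then repeat the whole procedure for sda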
The VMs all reside on Ceph RBDs shared between three nodes. I need to change all three nodes. So from my point of view, splitting the ZFS mirror, repartitioning, zfs send-ing the contents and rebooting should be the easiest way.