Yesterday I upgraded my Proxmox servers following https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
Now I am facing the issue that I can no longer create new OSDs:
# pveceph osd create /dev/sdb -db_dev /dev/nvme1n1
binary not installed: /usr/sbin/ceph-volume
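What I would check first (my assumption being that ceph-volume is shipped as its own package with the Quincy repositories) is whether the package is installed at all:
# dpkg -l | grep ceph-volume
# apt install ceph-volume   (only if the package turns out to be missing)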
Any ideas?
Hi Fabian,
I switched back on Saturday because I had seen the change, and it is working fast again.
Thanks a lot for your work on Proxmox - it's a great environment for virtualization!
I did some tests with krbd and the difference is amazing:
krbd: INFO: transferred 32.00 GiB in 62 seconds (528.5 MiB/s)
rbd: INFO: transferred 32.00 GiB in 249 seconds (131.6 MiB/s)
The backup was deleted before each run so that every run is a full backup. While the rbd run was in progress I also...
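For anyone who wants to reproduce the comparison: switching an RBD storage between librbd and the kernel client is a single storage property; the storage ID "ceph-vm" below is just a placeholder:
# pvesm set ceph-vm --krbd 1   (map images through the kernel RBD driver)
# pvesm set ceph-vm --krbd 0   (back to librbd inside QEMU)
As far as I know, running VMs only pick up the change after they have been stopped and started again.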
Hi Fabian,
After your suggested changes to the VM, I waited around 1.5 h to let the IMAP cache and processes settle a bit before starting a manual backup. It got worse and the IMAP server did not look healthy :-(
top - 10:07:52 up 1:38, 1 user, load average: 75.47, 50.88, 24.63
Tasks: 1651...
OK, I checked the load on the Ceph cluster while the backup is running (see attached file) and it is not critical. We have 5 nodes. The IMAP server is on pve04 and we use
rbd_read_from_replica_policy = localize
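For reference, that option sits in the client section of the Ceph config, on a hyper-converged PVE cluster that is /etc/pve/ceph.conf; the snippet below is how I would expect it to look, not a verified copy of our file:
[client]
        rbd_read_from_replica_policy = localize
As far as I know, localize only takes effect when the cluster's minimum compat client is Octopus or newer.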
Just a short period of iostat output on this VM while no backup is running:
Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util...
Updates on the IMAP server itself - nothing around this date.
Updates on the pbs:
Start-Date: 2022-04-23 06:56:22
Commandline: apt-get install qemu-guest-agent
Install: qemu-guest-agent:amd64 (1:5.2+dfsg-11+deb11u1), liburing1:amd64 (0.7-3, automatic), libglib2.0-0:amd64 (2.66.8-1, automatic)...
Now about the log files:
I took one day before the slowdown and one day after and chose 4 different backup slots per day:
backup_einhorn_Apr20_0000:INFO: transferred 122.87 GiB in 680 seconds (185.0 MiB/s)
backup_einhorn_Apr20_1200:INFO: transferred 173.87 GiB in 922 seconds (193.1 MiB/s)...
Downgrading to 2.1.5-1 did not help.
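In case anyone wants to repeat that test, the downgrade itself is just pinning the old version with apt (the version string is the one mentioned above):
# apt install proxmox-backup-client=2.1.5-1
# apt-mark hold proxmox-backup-client   (so it does not get pulled forward again on the next upgrade)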
There was no increase in load on the Grafana dashboards for Ceph; of course the throughput goes up because of the increased backup traffic.
Logfiles will follow later today.
Some graphs about the Dovecot load:
Log
Hi all,
it would be very helpful to be able to filter the list by a VM ID as well. If you back up hundreds of VMs and want to see how the backup duration of a single VM is developing, you currently have no chance to see it.
There is of course a way by using sed and awk and going through the log files, because the ID is part of the...
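As a rough stop-gap, something like this works against saved task logs; the log directory and the VM ID 105 below are placeholders:
# grep -l 'Backup of VM 105' /path/to/task-logs/* | xargs grep -h 'transferred'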
Hi community,
we have a problem and would like to help with all the data you need to debug the situation we are facing:
Since the update of proxmox-backup-client:amd64 from 2.1.5-1 to 2.1.6-1 on 2022-04-25, the backup of a Debian 10 VM with a huge ext4 filesystem and an IMAP server (Dovecot...
Setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=10 solved the problem.
It seems that my SAS HBA comes up too slowly. The system SSDs for the rpool are connected directly to the mainboard.
Just attached a bootlog (serial console)
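For anyone hitting the same thing: the variable goes into /etc/default/zfs and the initramfs has to be rebuilt afterwards (paths as on a stock PVE/Debian install):
In /etc/default/zfs:
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='10'
Then rebuild the initramfs so the change ends up in the boot image:
# update-initramfs -u -k all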
Flow control was active on the NIC but not on the switch.
Enabling flow control for both directions on the switch solved the problem:
flowcontrol receive on
flowcontrol send on
Port        Send FlowControl     Receive FlowControl    RxPause    TxPause
            admin    oper        admin    oper ...
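For completeness, on the Arista side those two commands go under the interface configuration, and the NIC side can be checked with ethtool; the interface names below are placeholders:
interface Ethernet10
   flowcontrol receive on
   flowcontrol send on
On the Proxmox node:
# ethtool -a enp3s0f0   (shows the current pause settings)
# ethtool -A enp3s0f0 rx on tx on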
I have a fresh Proxmox installation on 5 servers (Xeon E5-1660 v4, 128 GB RAM), each with 8 Samsung SM863 960 GB SSDs connected to an LSI 9300-8i (SAS3008) controller and used as OSDs for Ceph.
The servers are connected to two Arista DCS-7060CX-32S switches. I'm using MLAG bond (bondmode LACP...
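The bonding part of /etc/network/interfaces looks roughly like this; interface names, hash policy and addresses are placeholders, not a copy of the real config:
auto bond0
iface bond0 inet manual
        bond-slaves enp3s0f0 enp3s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0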