Search results

  1. [SOLVED] ceph-volume gone after upgrade to Quincy?

    Can be solved via apt-get install ceph-volume, but this should be done automatically because the GUI depends on it. (See the sketch after this list.)
  2. [SOLVED] ceph-volume gone after upgrade to Quincy?

    Yesterday I upgraded my Proxmox servers following https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy and I am no longer able to create new OSDs: # pveceph osd create /dev/sdb -db_dev /dev/nvme1n1 binary not installed: /usr/sbin/ceph-volume Any ideas?
  3. Very high iowait since proxmox-backup-client 2.1.6-1

    Hi Fabian, I switched back on Saturday because I had seen the change, and it is working fast again. Thanks a lot for your work on Proxmox - it's a great environment for virtualization!
  4. Very high iowait since proxmox-backup-client 2.1.6-1

    I did some tests with krbd and the difference is amazing: krbd: INFO: transferred 32.00 GiB in 62 seconds (528.5 MiB/s) rbd: INFO: transferred 32.00 GiB in 249 seconds (131.6 MiB/s) The backup was deleted before each run to force a full backup. While the update with rbd was running I also... (See the krbd sketch after this list.)
  5. Very high iowait since proxmox-backup-client 2.1.6-1

    Thanks for the details and the fast support on this topic. If I can help with any debugging information or tests, please let me know.
  6. Very high iowait since proxmox-backup-client 2.1.6-1

    Hi Fabian, I waited around 1.5h to let the IMAP cache and processes settle a bit before starting a manual backup after your suggested changes to the VM. It got worse, and the IMAP server did not look healthy :-( top - 10:07:52 up 1:38, 1 user, load average: 75.47, 50.88, 24.63 Tasks: 1651...
  7. Very high iowait since proxmox-backup-client 2.1.6-1

    OK, I checked the load on the Ceph cluster while the backup is running (see attached file) and it is not critical. We have 5 nodes. The IMAP server is on pve04 and we use rbd_read_from_replica_policy = localize (see the sketch after this list).
  8. Very high iowait since proxmox-backup-client 2.1.6-1

    Just a short period of iostat on this VM while no backup is running: Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util...
  9. Very high iowait since proxmox-backup-client 2.1.6-1

    VM config: #einhorn: IMAP/Mail Server agent: enabled=1 boot: order=scsi0;ide2;net0 cores: 2 cpu: host ide2: none,media=cdrom keyboard: de memory: 16384 name: einhorn net0: virtio=00:16:3e:00:18:00,bridge=vmbr0,tag=905 numa: 1 onboot: 0 ostype: l26 rng0: source=/dev/urandom scsi0...
  10. Very high iowait since proxmox-backup-client 2.1.6-1

    Updates on the IMAP server itself - nothing around this date. Updates on the PBS: Start-Date: 2022-04-23 06:56:22 Commandline: apt-get install qemu-guest-agent Install: qemu-guest-agent:amd64 (1:5.2+dfsg-11+deb11u1), liburing1:amd64 (0.7-3, automatic), libglib2.0-0:amd64 (2.66.8-1, automatic)...
  11. Very high iowait since proxmox-backup-client 2.1.6-1

    Now about the log files: I took one day before the slowdown and one day after, and chose 4 different backup slots per day: backup_einhorn_Apr20_0000:INFO: transferred 122.87 GiB in 680 seconds (185.0 MiB/s) backup_einhorn_Apr20_1200:INFO: transferred 173.87 GiB in 922 seconds (193.1 MiB/s)...
  12. Very high iowait since proxmox-backup-client 2.1.6-1

    Downgrading to 2.1.5-1 did not help. There was no increase in load on the Grafana dashboards for Ceph - of course the throughput increases because of the additional backup traffic. Logfiles will follow later today. Some graphs about the Dovecot load: Log
  13. Feature request: ServerAdministration:tasks filter for vm id

    Hi all, it would be very helpful to be able to filter the task list by VM ID as well. If you back up hundreds of VMs and want to see how the backup duration of a VM is developing, you have no chance of seeing it. There is of course a way by using sed and awk to go through the log files, because the ID is part of the... (A sketch of such a workaround follows after this list.)
  14. Very high iowait since proxmox-backup-client 2.1.6-1

    Hi community, we have a problem and would like to help with all the data you need to debug the situation we are facing: since the update of proxmox-backup-client:amd64 from 2.1.5-1 to 2.1.6-1 on 2022-04-25, the backup of a Debian 10 VM with a huge ext4 filesystem and an IMAP server (Dovecot...
  15. [SOLVED] zfs: cannot import rpool after reboot

    Setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=10 solved the problem (see the sketch after this list). It seems that my SAS HBA comes up too slowly; the system SSDs for the rpool are connected directly to the mainboard. Just attached a boot log (serial console).
  16. [SOLVED] Ceph & output discards (queue drops) on switchport

    Flow control was active on the NIC but not on the switch. Enabling flow control in both directions solved the problem (see the sketch after this list): flowcontrol receive on flowcontrol send on Port Send FlowControl Receive FlowControl RxPause TxPause admin oper admin oper...
  17. [SOLVED] Ceph & output discards (queue drops) on switchport

    I have a fresh Proxmox installation on 5 servers (Xeon E5-1660 v4, 128 GB RAM), each with 8 Samsung SSD SM863 960GB drives connected to an LSI-9300-8i (SAS3008) controller and used as OSDs for Ceph. The servers are connected to two Arista DCS-7060CX-32S switches. I'm using an MLAG bond (bond mode LACP...
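
Sketch for results 1 and 2 - a minimal, hedged sequence for restoring the missing ceph-volume binary after the Pacific-to-Quincy upgrade, assuming a standard Proxmox VE node; the pveceph invocation is copied from result 2:

    # Reinstall the ceph-volume package that the upgrade left out
    # (it should normally be pulled in automatically, since the GUI depends on it)
    apt-get update
    apt-get install ceph-volume

    # Retry the OSD creation from result 2
    pveceph osd create /dev/sdb -db_dev /dev/nvme1n1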
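
Sketch for result 4 - one way to switch an RBD storage to the kernel client (krbd) in Proxmox VE. This is an illustration, not the poster's exact setup; 'ceph-rbd' is a hypothetical storage ID, and guests must be restarted or migrated before the change takes effect:

    # Enable the kernel RBD client for an existing RBD storage
    # ('ceph-rbd' is a placeholder - substitute your own storage ID)
    pvesm set ceph-rbd --krbd 1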
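
Sketch for result 7 - where rbd_read_from_replica_policy = localize would typically be configured; the placement in the client section of ceph.conf is an assumption, not taken from the thread:

    # /etc/ceph/ceph.conf (sketch)
    [client]
        # Serve reads from the closest replica (by CRUSH location) instead of
        # always hitting the primary OSD; requires a sufficiently recent client
        rbd_read_from_replica_policy = localize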
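
Sketch for result 13 - a hedged shell workaround of the sed/awk kind the poster mentions. The task log directory /var/log/pve/tasks and the VMID 105 are assumptions for illustration, relying on vzdump task logs embedding the VMID in their UPID filename:

    # Pull the 'transferred ... in ... seconds' summary line out of every
    # vzdump task log for VMID 105 (hypothetical), to track backup duration
    find /var/log/pve/tasks/ -name '*:vzdump:105:*' \
        -exec grep -H 'transferred' {} +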
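
Sketch for result 15 - applying the ZFS_INITRD_PRE_MOUNTROOT_SLEEP fix on a Debian-based Proxmox install; the file location and the initramfs rebuild are assumed from standard ZFS-on-Debian packaging, not quoted from the thread:

    # /etc/default/zfs: give the slow SAS HBA 10 seconds to present its disks
    # before the initramfs tries to import the rpool
    ZFS_INITRD_PRE_MOUNTROOT_SLEEP='10'

    # Rebuild the initramfs so the setting takes effect at the next boot
    update-initramfs -u -k all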
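
Sketch for result 16 - the switch-side change in Arista EOS syntax, since the thread mentions Arista DCS-7060CX-32S switches; the interface name Ethernet1 is a placeholder:

    ! Enable flow control in both directions on the Ceph-facing port
    ! (Ethernet1 is a hypothetical interface name)
    configure
    interface Ethernet1
       flowcontrol receive on
       flowcontrol send on
    end

    ! Verify the admin/oper state afterwards
    show interfaces Ethernet1 flowcontrol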
