Search results

  1. [SOLVED] Install PVE on Deb12 broken?

    ah - now it's working. server was fresh installed this morning. thank you
  2. [SOLVED] Install PVE on Deb12 broken?

    Hi Folke, thanks for answering my question.
    root@pve-2024:~# apt info proxmox-backup-client
    Package: proxmox-backup-client
    Version: 3.2.3-1
    Priority: optional
    Section: admin
    Source: rust-proxmox-backup
    Maintainer: Proxmox Support Team <support@proxmox.com>
    Installed-Size: 13.9 MB
    Depends...
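
    A quick way to check which repository a candidate package comes from, if versions look off (a minimal sketch with standard apt tooling, not taken from this thread):

        apt policy proxmox-backup-client   # installed/candidate versions and their origin repos
        apt policy proxmox-ve              # confirms the pve repo is actually being read
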
  3. [SOLVED] Install PVE on Deb12 broken?

    Hi, i'm following the guide in the wiki, but struggling with installing the proxmox-ve package. Here is the result:
    root@pve-2024:~# apt install proxmox-ve postfix open-iscsi chrony
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information...
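
    For context, a minimal sketch of the wiki's repository setup for PVE 8 on Debian 12 (bookworm); the repo line and key path follow the public wiki and are assumptions here, not output from this thread:

        echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-install-repo.list
        wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
          -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
        apt update && apt full-upgrade
        apt install proxmox-ve postfix open-iscsi chrony
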
  4. [SOLVED] Proxmox Corrupt after Reboot (common/sense.pm)

    apt --reinstall install libcommon-sense-perl did the trick
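
    If it is unclear which package owns a broken Perl module, dpkg can map the file back to its package first (a generic sketch, not from the thread):

        dpkg -S common/sense.pm                   # prints the owning package, here libcommon-sense-perl
        apt --reinstall install libcommon-sense-perl
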
  5. [SOLVED] Proxmox Corrupt after Reboot (common/sense.pm)

    Hi, i had a running Proxmox installation (Debian 11.9 - PVE 7.4.x). The system was struggling with some network issues, so i decided to reboot it. After the reboot, the host does not come back online in the Proxmox web interface. Looking into it via ssh gives me the error that pve-daemon is not...
  6. Another CEPH Performance Problem

    i'm still into it. updated the drivers inside the vms and rechecked. Virtio 0.1.165 drivers under Windows Server 2012 R2.
    winsat disk -ran -write -drive c
    Windows System Assessment Tool
    > Running: Feature Enumeration ''
    > Run Time 00:00:00.00
    > Running: Storage Assessment '-ran -write...
  7. Another CEPH Performance Problem

    in the end, it was a faulty ssd which had terrible iops and r/w performance. after removal, speed is back to normal. is there a log file for slow osds?
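
    There is no single "slow osd" log file, but per-OSD latency can be inspected live; a minimal sketch with stock ceph commands (assumed, not quoted from the thread; <id> is a placeholder):

        ceph osd perf                  # commit/apply latency per OSD; one outlier hints at a dying disk
        ceph health detail             # lists slow ops and problem pgs when the cluster flags them
        journalctl -u ceph-osd@<id>    # per-daemon log on the node hosting that OSD
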
  8. Another CEPH Performance Problem

    funny, i removed some of the ssds and added them right back into the ceph-cluster, and suddenly we got performance inside our vms. read-iops before: 8-10, now: 400-500... and backfilling is on the way with 36 pgs. what is that??? feels like the system is putting the osds into hibernate and to...
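
    Backfill progress after re-adding OSDs can be watched until all pgs return to active+clean; a minimal sketch (stock commands, assumed, not from the thread):

        watch -n 5 ceph -s    # overall recovery/backfill status
        ceph pg stat          # counts of pgs per state, e.g. how many are still backfilling
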
  9. Another CEPH Performance Problem

    default (kvm64). linux debian 9, but i have debian 10 also, windows server, windows 10, freebsd
  10. Another CEPH Performance Problem

    did you replace the ssds to get the performance up again? i'm still wondering why performance in the vms is so poor. Tried to update the virtio drivers in the kernel, but didn't find any new version.
  11. Another CEPH Performance Problem

    iops are the same, but the defective ssd was interrupting all vms; now the vms are much faster than before. i think i have to check each ssd. thank you.
  12. The virtual machine failed to start

    one of the pgs is not active since it is too full; i guess the vm stores its data on this pg
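
    To confirm which pg is inactive and whether fullness is the cause, something like the following should show it (a sketch with standard ceph tooling, not output from this thread):

        ceph health detail               # names the inactive pgs and any (near)full OSDs
        ceph pg dump_stuck inactive      # lists pgs stuck in a non-active state
        ceph osd df tree                 # per-OSD utilisation; full OSDs block the pg
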
  13. Another CEPH Performance Problem

    30 iops is what i got with ceph bench.... 30 iops times 7 (osds in each server) would be amazing.... removed one osd, wiped it and re-added it. Turned out that there was another osd that had a latency of about 2000ms. Pulled that out of the cluster; now waiting for recovery, then i'll retest.
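
    The remove/wipe/re-add cycle described here corresponds roughly to the following on a PVE node; the osd id and device name are placeholders, and this is a hedged sketch, not the poster's exact commands:

        ceph osd out <id>
        systemctl stop ceph-osd@<id>
        ceph osd purge <id> --yes-i-really-mean-it
        ceph-volume lvm zap /dev/sdX --destroy    # wipe the disk (placeholder device)
        pveceph osd create /dev/sdX               # re-add it as a fresh OSD
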
  14. Another CEPH Performance Problem

    Hi @spirit, yes, to test the real performance of the underlying storage you have to set sync=1, otherwise it will use the memory as cache. That would not represent the real performance of the storage. I'm not expecting 4k IOPS, but all of our vms are slow. A Windows update took 3 hours to...
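
    The sync=1 point matches the common single-job 4k sync-write test used to judge whether a disk is fit for ceph; a minimal sketch (generic test, placeholder filename, not the poster's exact line):

        fio --ioengine=psync --filename=/root/synctest --size=1G --direct=1 --sync=1 \
            --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=synctest
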
  15. Another CEPH Performance Problem

    do i have to change anything in the config of the vm? result after enabling krbd and a stop/start of the vm:
    fio --ioengine=psync --filename=/root/test --size=1G --time_based --name=fio --group_reporting --runtime=60 --direct=1 --sync=1 --rw=write --bs=4M --numjobs=4 --iodepth=32
    fio: (g=0)...
  16. Another CEPH Performance Problem

    krbd disabled:
    fio --ioengine=psync --filename=/root/test --size=1G --time_based --name=fio --group_reporting --runtime=60 --direct=1 --sync=1 --rw=write --bs=4M --numjobs=4 --iodepth=32
    fio: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync...
  17. Another CEPH Performance Problem

    i read somewhere that it may have something to do with the virtio driver i'm using. added a scsi0 device and retested:
    fio --rw=write --name=test --size=20M --direct=1
    test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
    fio-3.12
    Starting 1...
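
    When the virtio block driver is suspected, the usual counter-test on PVE is a scsi disk on the virtio-scsi-single controller with an iothread; a hedged sketch with placeholder vmid and storage names, not the poster's config:

        qm set <vmid> --scsihw virtio-scsi-single
        qm set <vmid> --scsi0 <storage>:vm-<vmid>-disk-0,iothread=1
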
  18. Another CEPH Performance Problem

    i see 64 as maximum. i migrated most of the vms, but this special one has to get some extra memory, so i shut it down, moved it to another already upgraded server and started it there. it is RBD.
    bootdisk: virtio0
    cores: 8
    description: BLABLABLABLA
    ide2: none,media=cdrom
    memory: 16384
    name...
  19. Another CEPH Performance Problem

    rados bench 600 write -b 4M -t 16 -p test
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 600 seconds or 0 objects
    Object prefix: benchmark_data_pve5-1_589524
      sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
        0...
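
    For completeness, the usual full rados bench sequence keeps the written objects so the read passes have something to read; a generic sketch (pool name "test" as in the thread, the rest assumed):

        rados bench -p test 600 write -b 4M -t 16 --no-cleanup
        rados bench -p test 600 seq -t 16     # sequential reads of the benchmark objects
        rados bench -p test 600 rand -t 16    # random reads
        rados -p test cleanup                 # remove the benchmark objects afterwards
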
  20. Another CEPH Performance Problem

    with these hardware specs i set it up with 1024 pgs. is that maybe the problem?
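
    The usual rule of thumb is pg_num of roughly (OSD count * 100) / replica size, rounded to a power of two; a worked sketch assuming, for illustration, 21 OSDs (3 nodes x 7) and size=3, since the exact node count is not stated in the snippet:

        # (21 * 100) / 3 = 700  ->  nearest powers of two are 512 and 1024,
        # so 1024 pgs is plausible for ~21 OSDs rather than obviously wrong
        ceph osd pool set <pool> pg_num 1024   # <pool> is a placeholder
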
