Search results

  1. VM I/O Performance with Ceph Storage

    @dragon2611 Do you have any idea how long it should take until things calm down again? So far the latencies have been horrible: VMs with disks on the NVMes are close to unusable
  2. VM I/O Performance with Ceph Storage

    OK, then we'll just wait a while... fingers crossed ;)
  3. VM I/O Performance with Ceph Storage

    We just used commands like this one to enable this option on the OSDs with NVMe disks: ceph config set osd.4 bdev_enable_discard true BUT now things got even worse... latencies on these disks increased immediately:
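
    For reference, a minimal sketch of toggling that setting per OSD with the ceph CLI; osd.4 is quoted from the post, while the second OSD ID and the revert step are assumptions:

        # Enable discard/TRIM on the BlueStore block device of selected OSDs
        # (osd.5 is a placeholder; substitute your actual NVMe OSD IDs).
        ceph config set osd.4 bdev_enable_discard true
        ceph config set osd.5 bdev_enable_discard true

        # Verify the effective value for one OSD
        ceph config get osd.4 bdev_enable_discard

        # If latencies degrade, as reported above, revert to the default
        ceph config set osd.4 bdev_enable_discard false
        ceph config set osd.5 bdev_enable_discard false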
  4. VM I/O Performance with Ceph Storage

    -> The disks are these: Crucial P2 CT1000P2SSD8 (1TB), Crucial P2 CT2000P2SSD8 (2TB), connected via PCIe adapter cards to PCIe 4x slots -> iperf gave this: -> Setting the VM disk cache to "WriteBack" doesn't really change anything. BUT: setting it to "WriteBack (unsafe)" massively increases...
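
    The cache modes mentioned there can also be set from the CLI; a sketch assuming VM ID 100 and a disk named vm-100-disk-0 on a storage called ceph-vm (all placeholders):

        # Switch an existing SCSI disk to writeback caching.
        qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writeback

        # "WriteBack (unsafe)" in the GUI corresponds to cache=unsafe, which
        # ignores guest flush requests and risks data loss on power failure.
        qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=unsafe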
  5. VM I/O Performance with Ceph Storage

    @shanreich Thanks for the hint, but this was the original setup we used until a few days ago. (Meaning: we HAD the public network on the 40Gb NICs as well.) During the trial-and-error investigations of the last weeks, this was the last thing we changed: set the public network from 172.20.81.0/24 to...
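
    For context, on a Proxmox-managed Ceph cluster those networks are defined in /etc/pve/ceph.conf; a sketch using the subnet quoted above, with the cluster subnet as a placeholder since the target of the change is cut off:

        # /etc/pve/ceph.conf (excerpt) -- 10.10.10.0/24 is a placeholder
        [global]
            public_network  = 172.20.81.0/24
            cluster_network = 10.10.10.0/24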
  6. VM I/O Performance with Ceph Storage

    (Continued:) Test VM config: Installing Gimp on this VM takes 3-4 minutes; I just made a screencast of it and uploaded it to Dropbox: https://www.dropbox.com/s/ws7hmxzdhpgtuaa/InstallGimp.webm?dl=0 Notice the "Extraction" or "Config" phase... this is original speed, not slow motion ;) BTW: For...
  7. VM I/O Performance with Ceph Storage

    Hi everybody, a while ago we set up a three-node Proxmox cluster, using the built-in Ceph features as the storage backend. After a while we noticed a strong decrease in I/O performance in the VMs when it comes to writing small files. Writing a single big file at once seems to perform quit...
  8. Proxmox GUI: Task List shows only old entries

    Hi everybody, I just came across this weird behaviour of the task list at the bottom of the GUI in a Proxmox 7.3-4 installation with 3 cluster nodes. When opened, it doesn't show current (or even recently finished) tasks, but only tasks from many hours ago: This screenshot was taken today...
  9. [SOLVED] proxmox-backup-client in docker: Subsequent backups never reuse data?

    Indeed, with the --tmpfs option in the docker command it looks MUCH better :) root@container:/# proxmox-backup-client backup backuptest.pxar:/home/backuptest Starting backup: host/container/2022-04-04T12:21:40Z Client name: container Starting backup protocol: Mon Apr 4 12:21:40 2022 No previous...
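
    A sketch of a docker invocation along those lines; the image name, repository string, and the --tmpfs mount point are assumptions, since the post only names the option itself:

        # Placeholders throughout: image name, repository, password, paths.
        # --tmpfs backs /home/backuptest with tmpfs instead of overlayfs;
        # the test data is then created inside the container before backing up.
        docker run --rm -it \
            --tmpfs /home/backuptest \
            -e PBS_REPOSITORY='user@pbs@pbs-host:datastore' \
            -e PBS_PASSWORD='...' \
            my-pbs-client-image \
            proxmox-backup-client backup backuptest.pxar:/home/backuptest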
  10. [SOLVED] proxmox-backup-client in docker: Subsequent backups never reuse data?

    If this were the case, I would really wonder what could have changed this metadata a few seconds after the first backup: First backup start time: 2022-04-01T16:03:17Z Second backup start time: 2022-04-01T16:03:33Z I downloaded the two backuptest.pxar(.didx) files via the PBS web GUI and...
  11. [SOLVED] proxmox-backup-client in docker: Subsequent backups never reuse data?

    Hi everybody, I have an older Debian Stretch server which can't simply be upgraded to a newer Debian version. There I want to use proxmox-backup-client to back up some files to a running PBS. As proxmox-backup-client can't be installed on Stretch because of missing dependencies, I gave a...
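
    The post is cut off, but one plausible way to containerize the client is a small image built from the public pbs-client repository; a sketch assuming a Debian Bullseye base (the thread does not show its actual Dockerfile):

        # Hypothetical Dockerfile: proxmox-backup-client on a modern base image,
        # usable from hosts whose own distribution is too old for the client.
        FROM debian:bullseye-slim
        RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates \
         && wget -q https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg \
              -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg \
         && echo "deb http://download.proxmox.com/debian/pbs-client bullseye main" \
              > /etc/apt/sources.list.d/pbs-client.list \
         && apt-get update && apt-get install -y --no-install-recommends proxmox-backup-client \
         && rm -rf /var/lib/apt/lists/*
        ENTRYPOINT ["proxmox-backup-client"]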
  12. "Fun" with .pxarexclude

    Indeed, it now works with this content: /* /backuppartly/* !backupme !backupmetoo !backuppartly !backuppartly/backupme As you correctly said, one first has to exclude everything under backuppartly (/backuppartly/*) and then include what is needed in two lines: (!backuppartly...
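
    Written out one pattern per line, the .pxarexclude content quoted above reads as follows; the comment syntax assumes the documented behaviour that empty lines and lines starting with # are ignored:

        # Order matters: the '!' lines re-include paths that the
        # wildcard lines before them excluded.
        /*
        /backuppartly/*
        !backupme
        !backupmetoo
        !backuppartly
        !backuppartly/backupme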
  13. "Fun" with .pxarexclude

    @Fabian_E: Next challenge: with your solution, include lines containing slashes don't seem to work at all, not just ones with leading slashes. Look at this example: /opt/backuptest# tree . ├── backupme │ └── file0 ├── backupmetoo │ ├── file0 │ ├── subfolder0 │ │ └── file0 │ └──...
  14. "Fun" with .pxarexclude

    @Fabian_E: Thanks a lot... indeed, this seems to work at first glance... with this content in .pxarexclude, this is the resulting backup, and it looks like what should be achieved:
  15. "Fun" with .pxarexclude

    Hi everybody, for hours now I have been trying to achieve a - for me - simple backup use case using .pxarexclude: -> Back up just one or two selected, named folders and all their subfolders, and ignore everything else. Assume this tree: /opt/backuptest# tree . ├── backupme │ └── file0 ├── backupmetoo │ ├──...
  16. VM Restore: /var/tmp too small -> Changeable?

    Sorry, never mind, it's resolved... I let the output mislead me... there really was too little space in the target storage :D
  17. VM Restore: /var/tmp too small -> Changeable?

    Hi everyone, on a Proxmox VE 7.1-10 I just tried to restore a VM, which failed. A closer look at the output reveals that the virtual disk(s) are first unpacked to /var/tmp: restore vma archive: lzop -d -c...
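
    Since the resolution in the previous entry turned out to be a full target storage rather than /var/tmp, checking both before digging deeper narrows this down quickly; pvesm is the stock Proxmox storage CLI:

        # Free space in the temporary extraction area
        df -h /var/tmp

        # Capacity and usage of all configured Proxmox storages
        pvesm status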
  18. Windows 10 Guest: Qemu Guest Agent update fails

    These solutions were the first ones I found when I searched for the problem. I tried both options several times, without success. Just like the thread starter, by the way; for him it didn't help either. And since this is a VM and not an HP notebook, as...
