Recent content by puzon

  1. Ceph slow transfer between vdisks on windows vm

    Hi. I have VE 6.4-13, Ceph 14.2.22. My pool contains vdisks with data (configuration on the screenshot). The VM is Windows 2019. When I copy files over SMB to a workstation, the speed is about 100 MB/s, which is great on a 1 Gb connection. But when I want to copy a file between two disks (pool1) inside...
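    (Note added for context, not part of the original post: a quick way to separate Ceph-side from guest-side bottlenecks is to benchmark the pool directly from a node with rados bench; the pool name pool1 is taken from the post.)

        # 10 seconds of 4 MiB object writes against the pool; keep the
        # objects so they can be read back in the next step.
        rados bench -p pool1 10 write --no-cleanup
        # Sequential read benchmark over the objects written above.
        rados bench -p pool1 10 seq
        # Remove the benchmark objects afterwards.
        rados cleanup -p pool1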
  2. Proxmox slow file transfer performance

    Similar problem here, I think: a Windows VM with two disks on Ceph storage, all connected over 10G. Transfer between the two disks is about 40 Mb/s.
  3. windows 2019, virtio configuration

    Sorry for the delay. Tested: after stopping the VM, moving it to another host, and starting it again, both networks fail. I need to uninstall (under Windows), rescan, and redo all the configuration.
  4. windows 2019, virtio configuration

    No, I need just one. Why?
  5. windows 2019, virtio configuration

    Sure.

        agent: 1
        balloon: 2048
        bootdisk: virtio0
        cores: 4
        cpu: IvyBridge
        hotplug: disk,network,usb,memory,cpu
        machine: q35
        memory: 8192
        name: REP-VM102-AD
        net0: virtio=16:34:E5:61:79:D7,bridge=vmbr1,firewall=1
        numa: 1
        onboot: 1
        ostype: win10
        parent: przed_upd_ad2
        sata2: none,media=cdrom
        scsihw...
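    (Aside, an assumption rather than part of the original reply: a listing in this form is what qm config prints on the Proxmox host; the vmid 102 is only guessed from the VM name REP-VM102-AD.)

        # Dump the full configuration of the VM (vmid assumed to be 102).
        qm config 102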
  6. windows 2019, virtio configuration

    0.1.208. Red Hat VirtIO Ethernet Adapter: driver date 30.08.2021, driver version 100.85.104.20800. As I mentioned, it was updated from a previous version (from 2020, as far as I remember).
  7. windows 2019, virtio configuration

    Hi. I have a few VMs with Windows 2019 Std. Every time I add some hardware such as a CD-ROM (in Proxmox hardware), shut the VM down and start it again, the Red Hat VirtIO network driver is down (screenshot). I need to uninstall it (on the Windows side), add it again, and do all the network configuration manually for this...
  8. Ceph pool resize

    Myst, you are my personal Jesus. Thanks - that works!
  9. Ceph pool resize

    Aaron, I added two more OSDs, then Ceph started rebalancing, and after it finished it looks like this (but I don't really understand it):

        data:
          pools:   1 pools, 1024 pgs
          objects: 12.52M objects, 48 TiB
          usage:   94 TiB used, 37 TiB / 131 TiB avail
          pgs:     1021 active+clean...
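    (Explanatory note, not in the original post: with the pool's replica count of 2 confirmed later in the thread, the numbers roughly add up - 48 TiB of logical data stored twice is about 96 TiB, close to the 94 TiB of raw usage reported, and the 37 TiB of raw space still available corresponds to roughly 18 TiB of new data. The split between logical and raw usage can be checked per pool:)

        # STORED = logical data, USED = raw space after replication;
        # with size=2, USED is roughly 2x STORED. Pool name from the thread.
        ceph df detail
        ceph osd pool get pool01 size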
  10. Ceph pool resize

    Can we do something?
  11. Ceph pool resize

    Yes, from the beginning it was 2:

        root@rep-srv10-a:~# ceph osd pool get pool01 size
        size: 2
        root@rep-srv10-a:~# ceph osd pool get pool01 min_size
        min_size: 1
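    (Sketch added for completeness, not from the original reply: since the thread is about resizing the pool, raising the replica count is a single command, after which Ceph backfills until every PG has three copies. Pool name taken from the thread.)

        # Raise the replica count from 2 to 3; this triggers backfill.
        ceph osd pool set pool01 size 3
        # Keep the pool writable as long as two healthy copies exist.
        ceph osd pool set pool01 min_size 2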
  12. Ceph pool resize

    Yes, OSDs 20 and 21. And yes, I know it should be 3, but I have 2 for now.
  13. Ceph pool resize

    Here you go.

        root@rep-srv10-a:~# ceph osd df tree
        ID  CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA    OMAP    META     AVAIL   %USE   VAR   PGS  STATUS  TYPE NAME
        -1         120.07584         -  120 TiB   94 TiB  94 TiB  19 MiB  149 GiB  26 TiB  78.16  1.00    -          root default
        -3          60.03792...
  14. Ceph pool resize

    A kind reminder.