Search results

  1. elurex

    Ceph rbd error - sysfs write failed

    rbd image 'vm-104-disk-1': size 60 GiB in 15360 objects order 22 (4 MiB objects) snapshot_count: 0 id: 433efbc6fb9b03 block_name_prefix: rbd_data.433efbc6fb9b03 format: 2 features: layering op_features: flags...
  2. elurex

    Ceph rbd error - sysfs write failed

    root@gpu01:~# ceph versions { "mon": { "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 3 }, "mgr": { "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 1 }, "osd": { "ceph version...
  3. elurex

    (Linux) GPU passthrough issues

    please try pc-q35-3.1 instead of q35 or i440fx if you are using PVE 6.0
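
    A minimal sketch of switching the machine type from the PVE host shell; the VM ID 104 is purely illustrative:

      # Hypothetical VM ID 104; set the QEMU machine type to pc-q35-3.1
      qm set 104 --machine pc-q35-3.1
      # Verify the resulting entry in the VM configuration
      qm config 104 | grep machine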
  4. elurex

    Ceph rbd error - sysfs write failed

    I am running PVE 6.0. On this PVE node I have already started 2 VMs with RBD disks on my Ceph storage. However, sometimes when I try to start a third VM using an RBD disk, PVE fails with the following message. It can only be solved by rebooting the PVE node, after which I can start 3 or more VMs using rbd...
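
    A hedged diagnostic sketch for this kind of krbd map failure; these are standard Ceph and kernel tools, not commands quoted from the thread:

      # List which RBD images the kernel has already mapped on this node
      rbd showmapped
      # Check the kernel log for the underlying reason behind "sysfs write failed"
      dmesg | tail -n 50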
  5. elurex

    PVE 5.4 live migration issue

    FYI, on PVE 6.0, live migration with local disks and a replicated volume is still not possible: root@gpu01:~# qm migrate 800 gpu02 --online --with-local-disks --force 2019-10-01 03:27:01 use dedicated network address for sending migration traffic (10.10.1.51) 2019-10-01 03:27:01 starting...
  6. elurex

    Ceph OSD will not delete

    ceph osd down 4 ceph osd rm 4 ceph auth del osd.4
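
    For context, a fuller removal sequence for an OSD that refuses to disappear might look like the sketch below (osd.4 as in the snippet; the out, stop, and crush remove steps are assumptions, not quoted from the thread):

      # Mark the OSD out and stop its daemon on the node that hosts it
      ceph osd out 4
      systemctl stop ceph-osd@4
      # Remove it from the CRUSH map, delete its auth key, then remove the OSD entry
      ceph osd crush remove osd.4
      ceph auth del osd.4
      ceph osd rm 4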
  7. elurex

    Ceph SSD, do i need "discard" or "ssd emulation" in vm settings?

    It still requires KVM to send discard; Ceph is capable of handling the discard command and passing it on to the corresponding OSD, so enabling discard should be enough.
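
    A minimal sketch of enabling discard on a VM disk; the VM ID, storage name, and bus are assumptions for illustration:

      # Hypothetical VM 104 with a SCSI disk on an RBD-backed storage named "rbd";
      # discard=on lets the guest's TRIM/discard requests reach Ceph, ssd=1 (SSD emulation) is optional
      qm set 104 --scsi0 rbd:vm-104-disk-1,discard=on,ssd=1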
  8. elurex

    CEPH disk usage

    1035.8 MB/s is your throughput. 1666.53 MB/s for a sequential single client is very nice. I am getting my results from the Ceph Manager dashboard.
  9. elurex

    CEPH disk usage

    With 80 OSDs you should be able to achieve 11 GiB/s. I have a 96 OSD setup across 8 nodes, and it is capable of achieving the following throughput
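
    If you want to reproduce such numbers outside the dashboard, a common (hedged) approach is a rados bench run; the pool name and duration below are assumptions:

      # 60-second sequential write benchmark against a hypothetical pool named "testbench"
      rados bench -p testbench 60 write --no-cleanup
      # Follow up with a sequential read pass over the objects just written
      rados bench -p testbench 60 seq
      # Remove the benchmark objects afterwards
      rados -p testbench cleanup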
  10. elurex

    GPU passthrough tutorial/reference

    The machine type needs to be pc-q35-3.1. Please remove args: -machine type=q35,kernel_irqchip=on. You need to pass the entire PCIe card at 04:00. Do you have IOMMU enabled in /etc/default/grub? Do your CPU and motherboard support VT-d?
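
    A sketch of the two host-side pieces mentioned here, assuming an Intel CPU and the 04:00 PCI address from the post; the VM ID is hypothetical:

      # /etc/default/grub - enable the IOMMU on the kernel command line, then update grub and reboot
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
      update-grub
      # Pass the whole card at 04:00 (all functions) to the VM as a PCIe device
      qm set 104 --machine pc-q35-3.1 --hostpci0 04:00,pcie=1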
  11. elurex

    GPU passthru blank screen

    Please check my settings with the red line highlights. I am using PVE 6.0 with an NV 2080 Ti (you need to pass through the entire PCIe card, including the audio and USB ports).
  12. elurex

    GPU passthrough tutorial/reference

    I am using PVE 6.0 with an NV 2080 Ti; please check my settings with the red line highlights. You need to pass through the entire PCIe device, including audio and also USB.
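
    The corresponding VM configuration entry, as a hedged sketch (the VM ID and the x-vga flag are assumptions; the PCI address follows the earlier posts):

      # /etc/pve/qemu-server/104.conf (hypothetical VM ID) - pass all functions of the card,
      # i.e. GPU, HDMI audio and the card's USB controller, as a single PCIe device
      machine: pc-q35-3.1
      hostpci0: 04:00,pcie=1,x-vga=1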
  13. elurex

    GPU passthru blank screen

    PVE 5.4 or PVE 6.0? A good place to start is https://pve.proxmox.com/wiki/PCI(e)_Passthrough
  14. elurex

    pvesh error

    JSON text must be an object or array (but found number, string, true, false or null, use allow_nonref to allow this) at /usr/share/perl5/PVE/CLI/pvesh.pm line 125
  15. elurex

    pvesh error

    I keep getting a pvesh error complaining about malformed JSON at /usr/share/perl5/PVE/CLI/pvesh.pm line 125:

      sub proxy_handler {
          my ($node, $remip, $path, $cmd, $param) = @_;
          my $args = [];
          foreach my $key (keys %$param) {
              next if $key eq 'quiet' || $key eq 'output-format'; # just to...
  16. elurex

    PVE Online Migration Fails After Update to PVE 5.4-3

    Yes, I have the same error. I realized that I need to move the cloud-init drive to local storage and not put it on shared storage; then live migration/HA will work.
  17. elurex

    PVE 5.4, cloud init and ceph issue

    I noticed that on PVE 5.4, if you have a VM that uses a cloud-init drive and the selected storage is Ceph or other shared storage, live migration/HA will fail because the file already exists on the destination.
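
    A hedged sketch of moving the cloud-init drive off shared storage; the VM ID, drive slot, and storage names are assumptions:

      # Drop the existing cloud-init drive that lives on shared storage (here assumed on ide2)
      qm set 104 --delete ide2
      # Recreate it on local storage so live migration/HA no longer collides on the destination
      qm set 104 --ide2 local-lvm:cloudinit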
