nvme partition to kvm guest as storage device

Discussion in 'Proxmox VE: Installation and configuration' started by pwl, Sep 16, 2018.

  1. pwl

    pwl New Member

    Joined:
    Sep 16, 2018
    Messages:
    4
    Likes Received:
    0
    I have an NVMe disk split into several partitions.
    One partition is used for bcache, and two others I want to use directly inside two KVM guests for MySQL.

    How can I set this up?
    I see the "cloop" options, but I don't understand them.
     
  2. Jeff Billimek

    Jeff Billimek New Member

    Joined:
    Feb 16, 2018
    Messages:
    5
    Likes Received:
    3
    I pass an NVMe device partition to a KVM guest for Kubernetes rook/ceph and it works great. This is how I did it:

    For a given NVMe partition (in this case nvme0n1p4, identified as /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4), I added the following to /etc/pve/qemu-server/<vmid>.conf:

    scsi1: /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4,backup=0,discard=on,replicate=0,size=9574M

    This shows up in the guest VM as (via lsblk):

    sdb 8:16 0 9.4G 0 disk

    via parted:

    Code:
    $ sudo parted /dev/sdb print
    Model: QEMU QEMU HARDDISK (scsi)
    Disk /dev/sdb: 10.0GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name           Flags
     1      1049kB  605MB   604MB                ROOK-OSD0-WAL
     2      605MB   1679MB  1074MB               ROOK-OSD0-DB
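
    The same mapping can also be done from the Proxmox CLI instead of editing the conf file by hand. A sketch, assuming VM id 100 and the same by-id path (adjust both for your setup):

    Code:
    # find the stable by-id name of the partition to pass through
    ls -l /dev/disk/by-id/ | grep nvme

    # attach it as scsi1 (equivalent to the <vmid>.conf line above)
    qm set 100 --scsi1 /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4,backup=0,discard=on,replicate=0

    Using the /dev/disk/by-id/ path rather than /dev/nvme0n1p4 is what keeps the mapping stable across reboots and device renumbering.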
     
  3. pwl

    pwl New Member

    Joined:
    Sep 16, 2018
    Messages:
    4
    Likes Received:
    0
    OK, the guest starts with this parameter, but it seems to be ignored.


    I don't have the lsblk command in this old Linux 2.6.38-16-generic #67~lucid1-Ubuntu guest.
    The other virtio drivers work fine.

    But how do I find the device in the guest?

    config:

    agent: 1
    boot: cdn
    bootdisk: virtio0
    cores: 8
    ide2: none,media=cdrom
    memory: 32000
    name: vsup
    net0: virtio=5A:0B:CE:5F:01:78,bridge=vmbr0
    numa: 0
    ostype: l26
    scsihw: virtio-scsi-pci
    smbios1: uuid=af67a777-33f6-4b6b-81ee-b5ac3cded4dd
    sockets: 2
    virtio0: local:101/vm-101-disk-1.qcow2,size=200G
    virtio1: local:101/vm-101-disk-2.qcow2,cache=writeback,size=1500G
    virtio2: raw6:101/vm-101-disk-1.raw,size=138G (another raw file on an NVMe partition, for testing; not fast)
    scsi1: /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_S466NX0K523886P-part5,backup=0,size=102400M
    (this last line is the new one)

    lspci doesn't show it.
    blkid doesn't show it either.

    00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
    00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
    00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
    00:01.2 USB Controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
    00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
    00:02.0 VGA compatible controller: Technical Corp. Device 1111 (rev 02)
    00:03.0 Unclassified device [00ff]: Qumranet, Inc. Virtio memory balloon
    00:05.0 SCSI storage controller: Qumranet, Inc. Device 1004
    00:08.0 Communication controller: Qumranet, Inc. Virtio console
    00:0a.0 SCSI storage controller: Qumranet, Inc. Virtio block device
    00:0b.0 SCSI storage controller: Qumranet, Inc. Virtio block device
    00:0c.0 SCSI storage controller: Qumranet, Inc. Virtio block device
    00:12.0 Ethernet controller: Qumranet, Inc. Virtio network device
    00:1e.0 PCI bridge: Red Hat, Inc. Device 0001
    00:1f.0 PCI bridge: Red Hat, Inc. Device 0001
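
    Without lsblk, the disk (if the guest kernel detects it at all) can still be checked with tools that should exist on a 2.6.38 Ubuntu guest; a sketch:

    Code:
    # a virtio-scsi disk, if detected, shows up as an sd device here
    cat /proc/partitions

    # kernel messages for newly attached SCSI disks
    dmesg | grep -i 'sd[a-z]'

    # SCSI devices known to the kernel
    cat /proc/scsi/scsi

    If nothing appears in any of these, the guest kernel likely has no driver for the virtio-scsi controller (00:05.0 above), which would explain why the scsi1 disk seems ignored.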
     
  4. pwl

    pwl New Member

    Joined:
    Sep 16, 2018
    Messages:
    4
    Likes Received:
    0
    OK, I changed the disk to virtio,
    but the results are not so good: raw on the host gives 90k write IOPS.
    virtio partition: write: io=8,192MB, bw=97,365KB/s, iops=24,341
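
    For reference, output in that format comes from fio. A hedged sketch of a 4k random-write job that would produce comparable numbers (the target device /dev/vdb and the job size are assumptions; writing to a raw device destroys its data):

    Code:
    # 4k random writes against the passed-through partition (DESTROYS data on it)
    fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --size=8g --group_reporting

    Running the same job on the host against the raw partition gives a baseline to compare the virtualization overhead against.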
     