Error during HA migration

Discussion in 'Proxmox VE (Deutsch)' started by kev1904, Feb 11, 2019.

  1. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    When I live-migrate a VM, the VM is on the new node afterwards, but it shows "internal-error", and the following message appears in the syslog of the node in question:
    Code:
    [ 2282.430587] *** Guest State ***
    [ 2282.430615] CR0: actual=0x0000000000000030, shadow=0x0000000060000010, gh_mask=fffffffffffffff7
    [ 2282.430649] CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffe871
    [ 2282.430681] CR3 = 0x00000000feffc000
    [ 2282.430697] RSP = 0xfffff800d83b46d8  RIP = 0xfffff800d8825e3f
    [ 2282.430720] RFLAGS=0x00000286         DR7 = 0x0000000000000400
    [ 2282.430744] Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
    [ 2282.430770] CS:   sel=0xf000, attr=0x0009b, limit=0x0000ffff, base=0x00000000ffff0000
    [ 2282.430800] DS:   sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430830] SS:   sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430861] ES:   sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430890] FS:   sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430920] GS:   sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430950] GDTR:                           limit=0x0000ffff, base=0x0000000000000000
    [ 2282.430979] LDTR: sel=0x0000, attr=0x00082, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.431009] IDTR:                           limit=0x0000ffff, base=0x0000000000000000
    [ 2282.431038] TR:   sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000
    [ 2282.431068] EFER =     0x0000000000000000  PAT = 0x0007040600070406
    [ 2282.431093] DebugCtl = 0x0000000000000000  DebugExceptions = 0x0000000000000000
    [ 2282.431120] Interruptibility = 00000000  ActivityState = 00000000
    [ 2282.431143] *** Host State ***
    [ 2282.431158] RIP = 0xffffffffc0cf1ede  RSP = 0xffffa3808de87cb0
    [ 2282.431182] CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040
    [ 2282.431206] FSBase=00007f3d573ff700 GSBase=ffff96327fa40000 TRBase=fffffe000002f000
    [ 2282.431235] GDTBase=fffffe000002d000 IDTBase=fffffe0000000000
    [ 2282.431259] CR0=0000000080050033 CR3=0000003fd1520006 CR4=00000000000226e0
    [ 2282.431285] Sysenter RSP=fffffe000002e200 CS:RIP=0010:ffffffff94c01a80
    [ 2282.431311] EFER = 0x0000000000000d01  PAT = 0x0407050600070106
    [ 2282.431334] *** Control State ***
    [ 2282.431349] PinBased=0000003f CPUBased=96a1e9fa SecondaryExec=000004eb
    [ 2282.431374] EntryControls=0000d1ff ExitControls=002fefff
    [ 2282.431396] ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000
    [ 2282.431422] VMEntry: intr_info=800000b0 errcode=00000000 ilen=00000000
    [ 2282.431447] VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
    [ 2282.431472]         reason=80000021 qualification=0000000000000000
    [ 2282.431497] IDTVectoring: info=00000000 errcode=00000000
    [ 2282.431518] TSC Offset = 0xfffffa7b2fffbace
    [ 2282.432240] TPR Threshold = 0x00
    [ 2282.432980] EPT pointer = 0x0000001fab98701e
    [ 2282.433691] PLE Gap=00000080 Window=00001000
    [ 2282.434386] Virtual processor ID = 0x0001
    After restarting the VM, everything is fine again.
     
  2. dlimbeck

    dlimbeck Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2018
    Messages:
    131
    Likes Received:
    9
    Are there any hardware differences between the two nodes?
     
  3. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    No, the servers are all identical: same amount of RAM and the same Proxmox versions.
     
  4. dlimbeck

    dlimbeck Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2018
    Messages:
    131
    Likes Received:
    9
    Please post the VM config and information about the nodes' hardware.
     
  5. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    Intel(R) Xeon(R) CPU E5645
    Storage is on Ceph.

    Code:
    agent: 0
    bootdisk: virtio0
    cores: 1
    cpu: host
    memory: 2048
    name: test
    net0: virtio=2E:64:94:81:3F:C5,bridge=vmbr0,tag=25
    numa: 0
    ostype: win7
    scsihw: virtio-scsi-pci
    smbios1: uuid=8b3de9bb-8fad-43df-9a7c-b1a591260a3e
    sockets: 2
    virtio0: Cephei:vm-100-disk-0,cache=writeback,size=50G
    vmgenid: 58855366-2357-492e-90f2-1ccd3699e791
    
     
  6. dlimbeck

    dlimbeck Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2018
    Messages:
    131
    Likes Received:
    9
    If the hardware is identical, could different firmware versions be running on the servers? The output of 'pveversion -v' from all the nodes involved would also be interesting.
     
  7. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    Code:
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-7-pve: 4.15.18-27
    pve-kernel-4.15.18-4-pve: 4.15.18-23
    pve-kernel-4.15.17-3-pve: 4.15.17-14
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-3
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-43
    libpve-guest-common-perl: 2.0-18
    libpve-http-server-perl: 2.0-11
    libpve-storage-perl: 5.0-33
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.0.2+pve1-5
    lxcfs: 3.0.2-2
    novnc-pve: 1.0.0-2
    proxmox-widget-toolkit: 1.0-22
    pve-cluster: 5.0-31
    pve-container: 2.0-31
    pve-docs: 5.3-1
    pve-edk2-firmware: 1.20181023-1
    pve-firewall: 3.0-16
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-9
    pve-libspice-server1: 0.14.1-1
    pve-qemu-kvm: 2.12.1-1
    pve-xtermjs: 1.0-5
    qemu-server: 5.0-43
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.12-pve1~bpo1
    
    (The 'pveversion -v' output of the second and third nodes is identical to the first node above.)
    Code:
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-4-pve: 4.15.18-23
    pve-kernel-4.15.17-3-pve: 4.15.17-14
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-3
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-43
    libpve-guest-common-perl: 2.0-18
    libpve-http-server-perl: 2.0-11
    libpve-storage-perl: 5.0-33
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.0.2+pve1-5
    lxcfs: 3.0.2-2
    novnc-pve: 1.0.0-2
    proxmox-widget-toolkit: 1.0-22
    pve-cluster: 5.0-31
    pve-container: 2.0-31
    pve-docs: 5.3-1
    pve-edk2-firmware: 1.20181023-1
    pve-firewall: 3.0-16
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-9
    pve-libspice-server1: 0.14.1-1
    pve-qemu-kvm: 2.12.1-1
    pve-xtermjs: 1.0-5
    qemu-server: 5.0-43
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.12-pve1~bpo1
    
     
  8. dlimbeck

    dlimbeck Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2018
    Messages:
    131
    Likes Received:
    9
    Would it be possible to test with the default CPU type (kvm64) instead of 'host'?
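    For reference, a minimal sketch of making that change on the CLI (VMID 100 is assumed from the config posted above; the same setting is available in the GUI under the VM's Hardware panel):

    ```
    # On the node currently hosting the VM (VMID 100 assumed):
    #   qm set 100 --cpu kvm64
    # Afterwards the VM config contains the line:
    cpu: kvm64
    # A full stop and start of the VM is needed for the new CPU type to apply.
    ```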
     
    kev1904 likes this.
  9. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    Yep, with kvm64 it works perfectly.
     
  10. dlimbeck

    dlimbeck Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2018
    Messages:
    131
    Likes Received:
    9
    Apparently the nodes are not completely identical after all (different hardware, firmware, etc.). Is nested virtualization needed? If not, I would simply stay with 'kvm64'.
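    The mismatch suspected here can be checked by comparing the CPU feature flags each node actually exposes. A minimal, self-contained sketch; the two flag lists are inlined for illustration, and in practice you would capture them on each node with `grep -m1 '^flags' /proc/cpuinfo`:

    ```shell
    # Hypothetical flag lists from two nodes; on real hosts, capture with:
    #   grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2
    flags_node1="fpu vme de pse tsc msr sse sse2 aes"
    flags_node2="fpu vme de pse tsc msr sse sse2"

    # One flag per line, sorted, so comm(1) can compare the two sets.
    printf '%s\n' $flags_node1 | sort > /tmp/flags_node1.txt
    printf '%s\n' $flags_node2 | sort > /tmp/flags_node2.txt

    # Flags present on node1 but missing on node2 -- any output here means
    # a 'cpu: host' guest can hit an invalid guest state after live migration.
    comm -23 /tmp/flags_node1.txt /tmp/flags_node2.txt
    ```

    Microcode or BIOS differences can change the exposed flags even on nodes with the same CPU model, which would match the symptom in this thread.
    
    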
     
  11. kev1904

    kev1904 New Member

    Joined:
    Feb 11, 2019
    Messages:
    6
    Likes Received:
    0
    Well, the CPUs are the same in all nodes. But it's no big deal; we'll simply use kvm64. Thanks for the help.
     