Search results

  1. Kernel-Panic on KVM-Guest on Proxmox 3.4

    Hi, this is not a real kernel panic; it's your VM's kernel sending an error message because the storage is not responding or is too slow. Maybe your storage is overloaded when this happens?
  2. VE 4.0 Kernel Panic on HP Proliant servers

    Hi, another way could be to disable the motherboard watchdog and use the HP iLO watchdog by default. Edit /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT="nmi_watchdog=0", then run update-grub and reboot.
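    The steps above can be sketched as follows (a minimal sketch, assuming a stock Debian/Proxmox GRUB setup; any other options already present in GRUB_CMDLINE_LINUX_DEFAULT should be kept alongside the new flag):

    ```shell
    # In /etc/default/grub, disable the kernel NMI watchdog so it does not
    # conflict with the HP iLO watchdog ("quiet" is just an example of an
    # existing option being preserved):
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0"

    # Regenerate the grub config and reboot for the change to take effect
    update-grub
    reboot
    ```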
  3. VE 4.0 Kernel Panic on HP Proliant servers

    I also found a note here: https://lkml.org/lkml/2014/4/25/184 — "hpwdt can not work as expected if hp-asrd is running simultaneously, because both hpwdt and hp-asrd update the same iLO watchdog timer." Do you have an hp-asrd daemon running? (Maybe from some HP management packages?)
  4. VE 4.0 Kernel Panic on HP Proliant servers

    Ubuntu has also disabled it by default: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1432837 But it could be a problem with the iLO configuration when the watchdog is enabled by hpwdt. Maybe there is an iLO timeout setting somewhere in the iLO configuration?
  5. PCIe passthrough does not work

    I think we could try qemu 2.3 with kernel 4.2. That could help show whether the problem comes from qemu 2.4 or from the kernel. If it doesn't work, try qemu 2.3 again but with kernel 4.1: http://download.proxmox.com/debian/dists/jessie/pvetest/binary-amd64.beta1/pve-kernel-4.1.3-1-pve_4.1.3-7_amd64.deb or...
  6. online fstrim in ext4 for local qcow2 image

    Note that you don't need to mount your filesystem with discard=on to use fstrim. If you use discard=on, a trim is done at each delete in your filesystem, and this can be quite slow. It's better to simply add a cron job, daily for example, which launches fstrim (without any discard=on).
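    A minimal sketch of such a daily cron script (the mount points are assumptions; adjust them to the filesystems you actually want trimmed):

    ```shell
    #!/bin/sh
    # /etc/cron.daily/fstrim — run a daily trim instead of mounting with
    # discard=on. The paths below are examples, not a fixed list.
    fstrim -v /             # trim the root filesystem
    fstrim -v /var/lib/vz   # trim local VM storage, if it is a separate fs
    ```

    Make the script executable (chmod +x) so cron picks it up.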
  7. pve-sheepdog zookeeper support.

    It won't be stable until 1.x. Currently there are still cluster format changes between releases (0.8 -> 0.9 requires a backup/restore, for example).
  8. PCIe passthrough does not work

    Proxmox doesn't add anything special to the distro; only qemu 2.4 is used instead of the default qemu 2.1 in Debian Jessie. You can install the Proxmox 3.4 qemu 2.2 packages on Proxmox 4.0 if you want. wget...
  9. PCIe passthrough does not work

    It could be great to see if it works with Proxmox qemu 2.4 + the Jessie 3.16 kernel, for example. I think it's a KVM bug in the latest kernels, but I would like to be sure it's not a regression in qemu.
  10. Error creating an SSD OSD (latest Ceph 0.94-7 on latest Proxmox 4.0)

    You can change the repository in /etc/apt/sources.list.d/ceph.list (https://download.ceph.com/debian-infernalis/). Please read the upgrade procedure: http://ceph.com/releases/v9-2-0-infernalis-released/ because there are some file permission changes with Infernalis. (Files were owned by root:root previously...
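    The repository switch can be sketched like this (assuming Debian Jessie; the exact upgrade and permission steps must come from the Infernalis release notes linked above):

    ```shell
    # Point apt at the Infernalis repository and upgrade the ceph packages
    echo "deb https://download.ceph.com/debian-infernalis/ jessie main" \
        > /etc/apt/sources.list.d/ceph.list
    apt-get update
    apt-get dist-upgrade

    # Infernalis runs the daemons as the "ceph" user instead of root, so
    # ownership of the data directories has to be fixed per the release notes
    chown -R ceph:ceph /var/lib/ceph
    ```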
  11. Proxmox 4.0 "Use tablet for pointer" doesn't work at vm start

    The tablet is disabled when SPICE is used, because it causes mouse problems in SPICE sessions.
  12. PCIe passthrough does not work

    Hi, which kernel and qemu version? (Proxmox 4.0 uses the Ubuntu Wily kernel + qemu 2.4.)
  13. Gluster packages from gluster.org

    >> 1. just update the gluster and keep using the PVE GUI for the mount
    If you are happy with gluster 3.7, yes, you can simply add the 3.7 Debian repository to Proxmox: http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/ There are no specific Proxmox patches in the current packages...
  14. warning: host doesn't support requested feature: CPUID.01H:EDX.ss [bit 27]

    You can try to pass args: -cpu ..... but it's really not recommended to use a core2duo vCPU on an AMD host CPU. This can give you VM instability/crashes because of missing CPU flags.
  15. LXC: raw to system files?

    You can do it by specifying size=0 for the disk. (But there is no quota like OpenVZ in that case.)
  16. Memory display in GUI with Numa

    I think this is because the memory stats come from the query-balloon QMP command, and it's not yet aware of hotplugged memory. It should be fixed in the coming qemu 2.3.
  17. ceph-performance and latency

    If you restart the Ceph OSD with /etc/init.d/ceph osd restart, it doesn't umount /var/lib/ceph/osd/ceph-0, so the new mount options are not applied. You can do:
    /etc/init.d/ceph osd stop
    umount /var/lib/ceph/osd/ceph-*
    /etc/init.d/ceph osd start
    or do a remount manually: mount -o...
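    The stop/umount/start sequence from the post, as a sketch (the init-script argument order is taken from the post and may differ between ceph versions; check your /etc/init.d/ceph usage line first):

    ```shell
    # Stop the OSD daemons, unmount their data partitions so that the new
    # mount options take effect on the next mount, then start them again
    /etc/init.d/ceph osd stop
    umount /var/lib/ceph/osd/ceph-*
    /etc/init.d/ceph osd start
    ```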
  18. Problems with sound and spice during migration

    Could you try with a Windows client with virt-viewer 1.0? https://fedorahosted.org/released/virt-viewer/virt-viewer-x64-1.0.msi I would like to be sure it's a client problem before trying to investigate on the qemu side.
  19. 2 networks cards

    Just to be sure, are the 2 switches linked together?
  20. Vm-to-VM communication

    >> Ubuntu 12.04 OpenVZ instance.
    veth (bridged) or venet (routed)? (veth is known to be slower, but not that slow.) I don't know if you can test between 2 KVM guests?