Search results

  1. VM crash with memory hotplug

    Hmm... that explains a lot ;-) I repeated the steps with this new knowledge and now we have the commit which causes it, I think.

        root@test:/usr/src/qemu# git bisect good
        3b3b0628217e2726069990ff9942a5d6d9816bd7 is the first bad commit
        commit 3b3b0628217e2726069990ff9942a5d6d9816bd7
        Author...
  2. VM crash with memory hotplug

    Thanks for the explanation. I've done the bisect but I don't think it gave usable information (so far). First I tried v2.5.1 v2.6.0 in bisect like you wrote. Every make in between worked, so I approved with 'git bisect good'; at the end it resulted in:

        root@test:/usr/src/qemu# git bisect good...
  3. VM crash with memory hotplug

    Thank you for your reply and testing. I'm very surprised you couldn't reproduce it, strange. The only thing I can think of is that we use Dell hardware exclusively; I tested it on Dell PowerEdge R310, R320, R420 and R610, in cluster setups with PVE 4.4 and standalone at our office on PVE 5. Possibly...
  4. VM crash with memory hotplug

    Because no one responds here and the Proxmox team also doesn't respond to our bug report at https://bugzilla.proxmox.com/show_bug.cgi?id=1107#c16, we doubt whether we can keep using PVE in the future. I'm still trying to solve this myself, but I would really appreciate some help with it. I've made some progress...
  5. [SOLVED] Build QEMU 2.9 package

    You're fast :-) -enable-kvm works perfectly, thanks!
  6. [SOLVED] Build QEMU 2.9 package

    Thank you Fabian! I got it working. But the VM is very slow now. The test VM used to boot within a few seconds and now takes over a minute; a dd write test under the default Proxmox QEMU reaches approximately 200 MB/s but now only 40 MB/s (I know dd isn't a benchmarking tool, but in the same VM it performs the...
  7. [SOLVED] Build QEMU 2.9 package

    I would like to test something, and for that I need to replace pve-qemu-kvm 2.9.0-4 with the upstream QEMU build (without the Proxmox patches). How can I do that? I can download http://download.qemu-project.org/qemu-2.9.0.tar.xz and build it from source, but it won't replace the installed pve-qemu-kvm package...
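A minimal sketch of building the upstream tarball linked above is below; it needs network access and build dependencies, so it is not runnable as-is. Installing under /usr/local via --prefix sidesteps the packaging question: the stock binary stays installed and the upstream one lives beside it rather than replacing it.

```shell
wget http://download.qemu-project.org/qemu-2.9.0.tar.xz
tar xf qemu-2.9.0.tar.xz
cd qemu-2.9.0
./configure --target-list=x86_64-softmmu --prefix=/usr/local
make -j"$(nproc)"
make install    # puts qemu-system-x86_64 under /usr/local/bin
```

The thread's resolution ("-enable-kvm works perfectly") suggests the upstream binary was then launched by hand with KVM enabled, rather than through the pve-qemu-kvm wrapper.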
  8. VM crash with memory hotplug

    I tested it with PVE 5.0 on a test server in our office. Standalone, only 1 SATA disk for OS and test VM on local-lvm. With above procedure (memory hotplug and test case) a clean Debian 9 install crashes directly after the first test run. Reinstalled the VM with CentOS 7, setup cronjob and...
  9. VM crash with memory hotplug

    Bump... I really think you need to take a few minutes to test this. It's crucial to have VMs that do not crash, and with recent pve-qemu-kvm versions VMs will crash when memory hotplug support is enabled in Proxmox and the Linux guest OS.
  10. VM crash with memory hotplug

    We're having problems with all newer pve-qemu-kvm versions: every version newer than 2.5.19 causes an unpredictable crash of our VMs. I've set up some tests and reproduction details; can you please check your configuration? With this test you can crash your VM in a few minutes. My test VM...
  11. KVM fixed memory - ballooning?

    Update: OS selected: Linux 4.X/3.X/2.6 Kernel
  12. KVM fixed memory - ballooning?

    In PVE 4.4 when creating a new VM, at the tab Memory there's a new checkbox "Ballooning" when "Use fixed size memory" is selected. I thought ballooning is only used with dynamic memory assignment (https://pve.proxmox.com/wiki/Dynamic_Memory_Management#Ballooning). Also the "Help" button doesn't...
  13. help chose Shared storage FS

    Is this what you're looking for? https://forum.proxmox.com/threads/fiber-channel-storage.3998/
  14. help chose Shared storage FS

    Can you please explain what you're trying to accomplish? I don't understand exactly what you're asking. I think you're looking for a redundant storage solution? What hardware do you have for this?
  15. [SOLVED] quick vm storage question

    Hi marszel, Go to your storage definition (Datacenter->Storage) and select "Disk Image" as Content for your local-lvm. Then you can create a VM with local-lvm as storage backend.
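For reference, the GUI steps above correspond to a stanza like the following in /etc/pve/storage.cfg; the storage, thin-pool, and volume-group names shown are the PVE installer defaults and may differ on your system:

```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content images
```

The `content images` line is what the "Disk Image" checkbox toggles; without it, local-lvm cannot be selected as a VM disk backend.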
  16. Drop cache issue.

    Can you post your /etc/pve/storage.cfg and the output of qm config <vmid>?
  17. Proxmox HA Cluster Disaster Recovery

    I've no experience with ZFS on PVE, nor with pve-zsync, but I suppose there is a misunderstanding here. AFAIK pve-zsync only works when you're using ZFS storage on your PVE node(s). The TS (fxandrei) doesn't use local ZFS storage; he uses FreeNAS (as ZFS storage) and exports it over NFS for his VM...
  18. Is Ceph too slow and how to optimize it?

    Your SSDs are consumer-grade and not fit for the journal job. See https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ and look for your Intel 520. The 9 MB/s is really slow; you have 7 OSDs (and their daemons) per server, so 7 x 9 MB/s = 63 MB/s...
  19. Is Ceph too slow and how to optimize it?

    A Dell R210 can take two 2,5" disks and the R210 II can take four, so how can you have 8 SSDs in it? As clarification, the R210 is a single-socket server, so I assume you have one X3460 with 4 cores/8 threads. 4 GB RAM is far too little: plan at least 1 GB per OSD, better 16 GB for decent performance. I think...
  20. Glusterfs fuse slow, how you handle data access in vm?

    Have you tried NFS-mounting your GlusterFS volume instead of using FUSE?
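Mounting a Gluster volume over NFS instead of FUSE could look like the /etc/fstab entry below; the server and volume names are placeholders, and vers=3 reflects that Gluster's built-in NFS server speaks NFSv3:

```
gluster1:/datavol  /mnt/datavol  nfs  defaults,_netdev,vers=3  0  0
```

The trade-off behind the question: the kernel NFS client caches more aggressively than the FUSE client, which often helps small-file and metadata-heavy workloads, at the cost of losing the FUSE client's built-in failover between Gluster servers.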
