Hmm... that explains a lot ;-) I repeated the steps with this new knowledge and now we have the commit which causes it, I think.
root@test:/usr/src/qemu# git bisect good
3b3b0628217e2726069990ff9942a5d6d9816bd7 is the first bad commit
commit 3b3b0628217e2726069990ff9942a5d6d9816bd7
Author...
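To double-check it I'll probably rebuild and test the revisions on either side of that commit, something like this (just a sketch, the crash test itself is the same one I used during the bisect):
# parent of the suspect commit - should be fine
git checkout 3b3b0628217e2726069990ff9942a5d6d9816bd7~1
make clean && make
# ... run the crash test with this build ...
# the suspect commit itself - should crash
git checkout 3b3b0628217e2726069990ff9942a5d6d9816bd7
make clean && make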
Thanks for the explanation. I've done the bisect but I don't think it gave usable information (so far).
First I tried v2.5.1 and v2.6.0 in bisect like you wrote. Every make in between worked, so I approved with 'git bisect good'; at the end it resulted in:
root@test:/usr/src/qemu# git bisect good...
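For reference, what I did was roughly this (a sketch; I only checked that each revision compiled and answered 'good' every time, which is probably why it ended up pointing at the last commit):
cd /usr/src/qemu
git bisect start
git bisect bad v2.6.0        # version that shows the crash
git bisect good v2.5.1       # version that still works
make                         # build the checked-out revision
git bisect good              # I answered 'good' after every successful build
# ... repeated until git printed a "first bad commit"
git bisect reset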
Thank you for your reply and testing. I'm very surprised you couldn't reproduce it, strange. The only thing I can think of is that we're using Dell hardware only; I tested it on Dell PowerEdge R310, R320, R420 and R610, in cluster setups with PVE 4.4 and standalone at our office on PVE 5. Possibly...
Because no one responds and the Proxmox team also doesn't respond to our bug report at https://bugzilla.proxmox.com/show_bug.cgi?id=1107#c16, we doubt whether we can keep using PVE in the future.
I'm still trying to solve this myself, but I would really appreciate some help with it.
I've made some progress...
Thank you Fabian! I got it working, but the VM is very slow now. The test VM booted within a few seconds and now it takes over 1 minute; also, a dd write test that does approximately 200MB/s in the default Proxmox QEMU now does 40MB/s (I know dd isn't a benchmarking tool, but in the same VM it performs the...
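For reference, the dd test I run inside the VM is something like this (file name and sizes are just what I happen to use):
# write 1GB and force it to disk, so the page cache doesn't hide the result
dd if=/dev/zero of=/root/ddtest.img bs=1M count=1024 conv=fdatasync
rm /root/ddtest.img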
I would like to test something, and I need to replace pve-qemu-kvm 2.9.0-4 with the original QEMU build (without the Proxmox patches). How can I do that? I can download http://download.qemu-project.org/qemu-2.9.0.tar.xz and build it from source, but it won't replace the installed pve-qemu-kvm package...
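What I have in mind is building it alongside the packaged version, roughly like this (the /opt prefix is just my idea, nothing official):
wget http://download.qemu-project.org/qemu-2.9.0.tar.xz
tar xf qemu-2.9.0.tar.xz
cd qemu-2.9.0
./configure --prefix=/opt/qemu-2.9.0 --target-list=x86_64-softmmu --enable-kvm
make -j$(nproc)
make install
# /opt/qemu-2.9.0/bin/qemu-system-x86_64 can then be tested manually,
# without touching the installed pve-qemu-kvm package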
I tested it with PVE 5.0 on a test server in our office. Standalone, only 1 SATA disk for OS and test VM on local-lvm.
With the above procedure (memory hotplug and test case) a clean Debian 9 install crashes directly after the first test run. I reinstalled the VM with CentOS 7, set up the cronjob and...
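The cronjob itself is nothing special, it just runs the test case periodically, along these lines (the script name and path are only placeholders):
# /etc/cron.d/memhotplug-test
*/5 * * * * root /root/run-testcase.sh >> /var/log/memhotplug-test.log 2>&1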
Bump...
I really think you need to take a few minutes to test this. It's kind of crucial to have VMs that don't crash, and with recent pve-qemu-kvm versions VMs will crash when memory hotplug support is enabled in Proxmox and in the Linux guest OS.
We're having problems with all newer pve-qemu-kvm versions: every version newer than 2.5.19 causes an unpredictable crash of our VMs. I've set up some tests and reproduction details; can you please check your configuration? With this test you can crash your VM in a few minutes.
My test VM...
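To get memory hotplug working in the Linux guest, the hotplugged memory also has to be onlined inside the guest. I use a udev rule for that, along the lines of what the Proxmox wiki describes (written from memory, check the wiki for the exact rule):
cat > /lib/udev/rules.d/80-hotplug-cpu-mem.rules <<'EOF'
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"
EOF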
In PVE 4.4, when creating a new VM, on the Memory tab there's a new checkbox "Ballooning" when "Use fixed size memory" is selected. I thought ballooning was only used with dynamic memory assignment (https://pve.proxmox.com/wiki/Dynamic_Memory_Management#Ballooning). Also, the "Help" button doesn't...
Can you please explain what you're trying to accomplish? I don't understand exactly what you're asking. I think you're looking for a redundant storage solution? What hardware do you have for this?
Hi marszel,
Go to your storage definition (Datacenter->Storage) and select "Disk Image" as Content for your local-lvm. Then you can create a VM with local-lvm as storage backend.
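If you prefer the command line, the same thing can be done with pvesm, something like this (local-lvm assumed from the default installation):
# enable "Disk image" (and optionally container) content on local-lvm
pvesm set local-lvm --content images,rootdir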
I've no experience with ZFS on PVE, nor with pve-zsync, but I suppose there is a misunderstanding here. AFAIK pve-zsync only works when you're using ZFS storage in your PVE node(s). The TS (fxandrei) doesn't use local ZFS storage; he uses FreeNAS (as ZFS storage) and exports it via NFS for his VM...
Your SSDs are consumer grade and not fit for the journal job. See https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ and look for your Intel 520. The 9MB/s is really slow; you have 7 OSDs (and pad daemons) per server, so 7x9MB/s = 63MB/s...
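The journal test from that blog post is basically a single 4k sync write job, roughly:
# WARNING: writes directly to the device, only run it on an empty/scratch SSD
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test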
A Dell R210 can have two 2.5" disks and the R210 II can have four. How can you have 8 SSDs in it? As clarification, the R210 is a 1-socket server, so I assume you have one x3460 with 4 cores/8 threads. 4GB RAM is far too little: at least 1GB per OSD, better use 16GB for better performance.
I think...