On an existing VM you can change this in the GUI.
Stop the VM.
On the Hardware tab, select the disk you want to change and remove it. This does not delete the disk; it detaches it from the VM and it will then be listed as an unused disk on the Hardware tab. Just be sure not to press the button that removes the whole VM!
Double-click the unused...
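If you prefer the CLI, roughly the same thing can be done with qm (the VM ID 100 and the storage/volume names below are just placeholders for your own setup):
qm stop 100
qm set 100 --delete ide0                       # detach the old disk; it reappears as unused0
qm set 100 --scsi0 local-lvm:vm-100-disk-1     # reattach the same volume on the new bus
qm config 100                                  # double-check the result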
FreeBSD seems to be complaining that disk IO is taking too long.
On Linux I see the OOM killer trying to free up RAM, so that VM was likely doing lots of disk IO to swap. I have disabled swap on nearly every VM because when they start using swap it causes IO issues and performance goes to...
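By disabling swap I just mean the usual thing inside the guest, something like this (the sed line is only a sketch; adjust it to however your fstab lists swap):
swapoff -a                                # turn swap off immediately
sed -i '/\sswap\s/ s/^/#/' /etc/fstab     # comment out the swap entry so it stays off after reboot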
On the Options tab you need to change the SCSI controller type to VirtIO, and on the Hardware tab choose SCSI as the bus for the disk.
I can confirm that TRIM works with Ceph when using VirtIO SCSI. Tried it yesterday.
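From the CLI that looks something like this (VM ID and storage name are placeholders; discard=on is what lets fstrim in the guest reach Ceph):
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 ceph-storage:vm-100-disk-1,discard=on
fstrim -av      # run inside the guest to verify TRIM actually works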
Using CephFS for storing backups sounds like a great idea. I'd suggest anyone contemplating this address these backup concerns:
Don't use the same Ceph cluster to store your only copy of the backups. When your Ceph cluster is fubar, so are your backups.
Protect your backups from hackers. If...
I think @ctcknows has a point.
I've not had much time to follow pve-devel lately; is this problem being worked on?
If QEMU cannot be modified to resolve the problem, then maybe Proxmox should be modified to obtain the backup data from Ceph snapshots instead.
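Something along these lines is what I have in mind, just a rough sketch with made-up pool/image/snapshot names using the standard rbd tools:
rbd snap create rbd/vm-100-disk-1@backup-20170101                        # point-in-time snapshot
rbd export rbd/vm-100-disk-1@backup-20170101 /backup/vm-100-disk-1.img   # copy the data out of the cluster
rbd snap rm rbd/vm-100-disk-1@backup-20170101                            # clean up once the export is done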
4.7 and 4.8 seem to have had a few issues with OOM, such as https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=0a0337e0d1d134465778a16f5cbea95086e8e9e0
I've had the OOM killer happen with swappiness set to 0, 1, 10 and 30.
So far increasing min_free_kbytes has prevented OOM events; I also currently have swappiness set to 10.
The only thing using lots of RAM on my system is cache, about 11GB when I experienced the OOM events.
Only have 16GB RAM on...
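For reference, this is roughly how I set it (the min_free_kbytes figure here is just an example, tune it to the amount of RAM in your node):
sysctl -w vm.min_free_kbytes=262144   # reserve ~256MB for the kernel's allocations
sysctl -w vm.swappiness=10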
I recently upgraded the kernel from 4.4.13-2-pve to 4.4.35-1-pve.
After the upgrade, Ceph OSDs would randomly get killed by the OOM killer even when there was plenty of RAM available.
Typically nearly all of the free RAM was consumed by cache when the OOM event occurred.
So far since doing this everything has...
I've had nothing but problems running DRBD over InfiniBand in Proxmox 4.x. Ceph works fine though.
I got some ConnectX cards to see if that makes a difference, but I have not had time to test them.
I'm hoping my workload will lighten up in January so I can start looking into the problems again.
What cache mode are you using for the virtual disks?
Also, please re-run the test but change "all" to "sd" so we can see what's going on with each individual disk:
iostat 1 | grep sd
Do you only have these two 4TB disks in the server?
If you have others, please let me know which disks are the 4TB...
You should install some tools to help see what's going on:
apt-get install sysstat
You can then run "iostat 1 -g all"
It will output the IO to each block device every second; the number you are most concerned with is tps, as this represents IOPS.
Pay attention to the "all" line.
Your disks can...
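If tps alone doesn't tell the whole story, the extended stats are handy too (assuming a reasonably recent sysstat, which supports -x):
iostat -x 1     # await and %util per device show whether the disks are saturated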
I can give you some real-world examples using mechanical disks on some older hardware.
I've got three Ceph nodes with 6 disks each, all 7200 RPM SATA disks.
12 of them are 4TB and the rest are a mix of 1/2/3TB disks.
We did move the journals to SSD, but for a single VM it did not make too much of a difference...
It would be convenient if I could select "reboot" in the Proxmox interface/API and Proxmox would issue a shutdown and then a start of the guest.
When QEMU updates come out, it is necessary to shut the VM down and start it back up so it runs under the updated code. (I suppose one could live migrate...
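For now I just do it by hand from the CLI, something like this (the VM ID is a placeholder):
qm shutdown 100 && qm start 100    # full stop/start so the guest comes back up on the new QEMU binary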
I would really like to see Proxmox support the new LUKS driver in QEMU 2.6.
My understanding is that it only works with the RAW format, but in the future it is likely to work with all formats.
It seems like everyone wants data encrypted 'at rest', and this would be a really simple way for people to...
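For anyone curious, creating a LUKS-format image with plain qemu-img looks roughly like this (passing the passphrase inline is just the simplest example, not something you'd do in production):
qemu-img create --object secret,id=sec0,data=mypassword -f luks -o key-secret=sec0 encrypted.luks 10G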
drbdmanage simply 'manages' DRBD resources.
You should still be familiar with managing DRBD yourself, or else when things go wrong you will be in a terrible position.
On pve212, try to bring up the drbdmanage control resource with this command:
drbdadm up .drbdctrl
Post the output of that command and if...
Yeah, drbdmanage is buggy; you likely need to manually perform some actions that drbdmanage failed to do.
What's the output of "drbdadm status" on both nodes?
I'm not sure if it works as I've not tried it, but I think in DRBD9 with drbdmanage it can be done online.
Limited info, but see:
man drbdmanage-resize-volume
Proxmox is still providing the three-month-old drbdmanage 0.91 when 0.94 was released just a couple of weeks ago.
With the large number of bugs in drbdmanage, updating more frequently would be helpful for the few of us trying to use it.