Hello guys.
After struggling with this problem for a year in an OVH datacenter, I finally might have something!
Standard options in sysctl are:
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
And when my SERVER HAS 256 GB of DDR memory !!! - then:
256 GB * 20% ≈ 51 GB
So in that case, if I am...
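On a box with that much RAM it often makes more sense to switch from the ratio knobs to absolute byte limits. A minimal sketch of what I mean (the file name and values are just assumptions, tune them for your workload):

# /etc/sysctl.d/99-writeback.conf (hypothetical file name)
vm.dirty_background_bytes = 268435456   # start background writeback at 256 MB of dirty data
vm.dirty_bytes = 1073741824             # throttle writers once 1 GB of dirty data piles up
# apply without a reboot:
sysctl -p /etc/sysctl.d/99-writeback.conf

Note that setting the *_bytes variants automatically zeroes the corresponding *_ratio settings.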
Hi guys.
The howto from the Proxmox wiki disappeared, and I can't get DRBD9 working on my 2-node cluster + a 3rd node as witness.
So far I have done:
vgcreate drbdpool /dev/sdb1
I added drbdpool in the Proxmox GUI and selected it as shared.
Now when I am creating a new VM or LXC, I don't see the image cloned to the second...
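For reference, the rest of the old wiki flow on top of that VG looked roughly like this; this is a sketch from memory, so the IPs and node name are placeholders, not my real ones:

drbdmanage init 10.0.0.1              # run on the first node, with its DRBD network IP
drbdmanage add-node node2 10.0.0.2    # run on the first node once for every additional node
drbdmanage list-nodes                 # all nodes should be listed and online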
/etc/pve/nodes
Search in all nodes, e.g.
/etc/pve/node1/qemu-server
/etc/pve/node2/qemu-server
for this VM config
It should be somewhere :)
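A quick way to do that search (102 is just the VM ID from this thread, replace it with yours):

find /etc/pve/nodes -name "102.conf"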
Is this a Proxmox cluster or a single server?
Search for this config somewhere :)
Your VM ID is 102, right?
P.S. I am using Proxmox 4.2 and I advise you to use it too :)
Another approach:
You can remove the disk record from the VM config.
Open this VM's config:
nano /etc/pve/qemu-server/102.conf
Delete the line containing this disk (something like ide0:....), save, and try to delete the whole VM again :)
also check...
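For the last step, the CLI equivalent of deleting the whole VM is qm destroy. A sketch with the example ID 102 (the ide0 line is only an illustration of the format, yours will look different):

nano /etc/pve/qemu-server/102.conf
# remove the offending disk line, which looks roughly like:
#   ide0: local:102/vm-102-disk-1.qcow2,size=32G
qm destroy 102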
Sure.
HP ProLiant DL380 G7
Smart Array P410i ( 05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01))
2x 250 GB Crucial MX200 SSDs in RAID1
Created 2 virtual disks: /dev/sda for Proxmox and /dev/sdb for DRBD
Other servers (all the others are Dell) but with...
More screenshots.
Anybody know why this is happening?
As far as I can see, 10 MB/s puts the SSD at 100% utilization.
Why ?
On node2 there is a MOVE DISK from the NAS to the internal DRBD on SSD.
Node4, which is connected to node2 via DRBD.
IO delay is still unsolved.
But I solved the DRBD sync speed :)
https://forum.proxmox.com/threads/settings-drbd-doesnt-presist-after-reboot-global_common-conf.27955/
THX !!!!
I finally solved it :)
Scenario:
Fresh booted nodes.
I am creating a new disk via the GUI. Syncing starts and persists at about 20-30 Mbit/s.
Then I simply execute this command on only one node (it will automatically be sent to the other nodes too).
You can monitor whether the changes apply with:
watch 'cat...
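The exact command got cut off in this excerpt; what I mean is the usual resync tuning through drbdadm, roughly like this (the resource name and the 100M cap are assumptions, pick values that fit your network and disks):

drbdadm disk-options --c-plan-ahead=0 --c-max-rate=100M <resource>   # lift the resync speed cap for that resource
watch -n1 drbdadm status                                             # watch the resync progress on DRBD 9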
Hi,
I have one cluster with a really low fsync rate, and I have some problems with high IO while converting qcow to raw on it, and also with other operations.
Specs:
Block size is 4K, ext4; the problem also occurs on LVM.
05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers...
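For reference, the fsync numbers I am talking about come from pveperf, e.g.:

pveperf /var/lib/vz    # reports FSYNCS/SECOND (among other things) for the storage behind that path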
Hi,
Settings written to global_common.conf don't work:
root@node2:/etc/drbd.d# cat global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com
global {...
In this file, if you add the text marked in red (see the sketch after the listing below), you will see the resource in the GUI :)
root@node2:/etc/pve# cat storage.cfg
dir: local
path /var/lib/vz
content iso,vztmpl
shared
maxfiles 0
lvmthin: local-lvm
vgname pve
thinpool data
content rootdir,images...
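The colored text did not survive in this excerpt; the kind of entry I mean follows the Proxmox 4.x DRBD9 examples and looks roughly like this (storage name and redundancy count are placeholders):

drbd: drbd1
        content images,rootdir
        redundancy 2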
Not completely solved...
This file
/var/lib/drbd.d/drbdmanage_global_common.conf
is rewritten empty after a reboot.
Where can I add those settings to make them permanent?
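Since drbdmanage regenerates that file, my working assumption (not verified) is that persistent tuning has to go through drbdmanage itself, so it is stored in its control volume, along these lines:

drbdmanage net-options --common --max-buffers 8000           # assumption: check drbdmanage net-options --help for the exact flags
drbdmanage peer-device-options --common --c-max-rate 100M    # assumption: same caveat as above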