I would say the latest DRBD8 version in the kernel and DRBD9 as a DKMS module. The other way around is more difficult to maintain (DKMS will not automatically downgrade from version 9 to 8).
Read this post and follow the links there for DRBD8 Kernel support and DRBD8 storage plugin.
Please read this thread, especially this answer from Linbit:
---> "Not publicly available from LINBIT."
You can get the packages for Debian Jessie here, and they also work in my setup with Proxmox 4.4. I helped with the Proxmox extensions of the DKMS package.
In the thread mentioned above, you...
I read that the Enterprise repo versions are slightly older than the community repo versions, to provide more stable versions for subscription users. So it might be possible that the Enterprise repo version doesn't have the feature I am asking for implemented.
I would like to know if the Custom...
Do you mean split brain in DRBD8 or Proxmox?
If it is the latter, what parts are affected by that?
If you mean DRBD8 then this can't happen in single primary mode, as long as the DRBD8 sync connection between the servers is working.
It is on purpose to have the cluster without HA in my setup...
I did this now, even though my BBU is reported as bad. I changed the batteries and it is still the same, but I know it is full (it reports the voltage). I hope the RAID controller will use the voltage provided by the BBU, even if it thinks the battery is bad.
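Side note, in case someone wants to check the same thing: assuming the MegaCli tool is installed for this LSI controller (the binary may be called megacli, MegaCli or MegaCli64 depending on the package), the BBU state including the voltage and the current cache policy can be queried roughly like this:

megacli -AdpBbuCmd -GetBbuStatus -aALL    # battery state, voltage, charge
megacli -LDGetProp -Cache -LAll -aALL     # current cache policy of the logical drives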
I keep the "directsync" because this description...
I have a TWO node cluster (NO HA!) with DRBD8 backed virtual disks.
Each virtual disk has its own DRBD8 instance and DRBD8 is running in single primary mode.
I wrote a Proxmox Storage Plugin to activate the DRBD8 volume and to make it primary only, when the VM is started.
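For illustration, such a per-disk resource looks roughly like this (host names, IP addresses and backing devices here are only placeholders; since allow-two-primaries is not set, the resource stays in the default single-primary mode):

resource vm-100-disk-1 {
    on nodeA {
        device    /dev/drbd1;
        disk      /dev/backing_vg/vm-100-disk-1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd1;
        disk      /dev/backing_vg/vm-100-disk-1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}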
Today I was...
I switched the RAID controller to WriteBack and tested again. The cache size is 512 MB and it is a Fujitsu/Siemens LSI MegaRAID in a Primergy RX600-S4.
On the host RAID (LVM LV 2G on VMs(/dev/sdb1)) I get:
dd if=/dev/zero of=/dev/vg_vms/vm-100-disk-2 bs=512M count=1 oflag=direct...
No, I am not, at least I don't see how the Page Cache would be used with my test:
dd if=/dev/zero of=/dev/vg_vms/vm-100-disk-2 bs=512M count=1 oflag=direct
536870912 bytes (537 MB) copied, 6,41299 s, 83,7 MB/s
As far as I understand "oflag=direct" means do not use the Page Cache.
The same says...
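For completeness, roughly how the two cases differ (note: without conv=fsync a buffered run mostly measures the Page Cache, not the disk):

dd if=/dev/zero of=/dev/vg_vms/vm-100-disk-2 bs=512M count=1 oflag=direct   # O_DIRECT, bypasses the Page Cache
dd if=/dev/zero of=/dev/vg_vms/vm-100-disk-2 bs=512M count=1 conv=fsync     # buffered write, but dd flushes before reporting the rate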
2x Hitachi HUA723020ALA641/ 1.818 TB
Yes, that was a test without the "oflag=direct" option.
I read in several threads, that it is best to use "cache=directsync" when the RAID controller uses a cache.
My test showed 84 MB/s on the host, but 48 MB/s from within the VM. And I don't understand why this...
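For reference, the cache mode ends up as a disk option in the VM config, so besides the GUI it can be set roughly like this (VMID, storage and volume names are only examples):

qm set 100 --virtio0 vms:vm-100-disk-1,cache=directsync
# resulting line in /etc/pve/qemu-server/100.conf:
# virtio0: vms:vm-100-disk-1,cache=directsync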
I am benchmarking my setup and found that the disk write performance in a VM is approx. 42% less than on the host.
I am using dd for testing. I know it is not ideal, but it should give an estimate, and a lot of others are using it, too.
I have a LSI RAID1 with a BBU configured in...
I think this is the "right" solution. Yesterday I looked at the storage implementation and I discovered "Storage/Custom".
I am now asking if there is already a plugin available which can present any block device as a storage to Proxmox. I don't want to reinvent the wheel ;)
This should work...
I am trying to use DRBD 8.4 with Proxmox 4.3. DRBD 8.4 lacks the auto-promote feature, so something has to set the resource to primary. To be more on the safe side, I don't want to use primary-primary mode on my disks.
I thought to use a script prior to starting a VM and after stopping it, which checks...
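A first sketch of what I have in mind (resource name and VMID are only examples, error handling omitted):

#!/bin/bash
# bring the DRBD 8.4 resource up and promote it, then start the VM
VMID=100
RES=vm-${VMID}-disk-1

drbdadm up "$RES" 2>/dev/null      # no-op if the resource is already up
drbdadm primary "$RES" || exit 1   # refuses if the peer is still Primary (single-primary mode)
qm start "$VMID"

# and after the VM has been stopped:
# drbdadm secondary "$RES"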
No this doesn't work, as I already stated in my initial post:
In /etc/init.d/drbd there is no runlevel defined in "Default-Start:", so it is never started by the init process
and even "update-rc.d drbd defaults" didn't change anything.
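What I would try next (just a sketch; the LSB header fields are standard Debian, the concrete drbd init script may differ): fill in the runlevels in the header first and then re-register the script, otherwise update-rc.d keeps honouring the empty header.

# in /etc/init.d/drbd, fill in the LSB header, e.g.:
#   # Default-Start:  2 3 4 5
#   # Default-Stop:   0 1 6
update-rc.d -f drbd remove
update-rc.d drbd defaults
# on the systemd-based Proxmox 4.x host, alternatively:
systemctl enable drbd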
I recently had a mail conversation with Martin Maurer and he told me that DRBD9 is a "Technology Preview" in Proxmox VE 4.x. So I shouldn't use it in a production environment until it is stable.
On the Linbit DRBD mailing list there was recently a message concerning a BUG in DRBD9 with an answer...
Another idea came to mind.
1) drbdmanage adds functionality on top of drbdadm. It is not so much that it would be impossible to write some Perl scripts to do the same. These new scripts could then be used by a new storage plugin for Proxmox.
2) Proxmox could fork the drbdmanage version before the...
Yes, I knew this already and I even answered to that thread.
But I don't use drbdmanage. I am using the method with LVM as storage on top of DRBD9 ("the old way"), and this was working in Proxmox 3.x.
The only thing I would like to know is, how "drbdadm up .." was done there, because it seems to...
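For anyone not familiar with it, the "old way" setup is roughly this (resource and VG names are only examples; the point is that LVM lives on top of the DRBD device):

drbdadm up r0                  # attach and connect the resource
drbdadm primary r0             # promote it on this node
pvcreate /dev/drbd0            # LVM physical volume on top of the DRBD device
vgcreate drbdvg /dev/drbd0     # this VG is then added in Proxmox as LVM storage
# in /etc/lvm/lvm.conf the backing disk should be filtered out, so that LVM
# only sees the PV through /dev/drbd0 and not through the raw disk underneath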
In the meantime I learned that DRBD9 is a kernel module and not a daemon.
But just after reboot I get:
root@xxx:/etc/lvm# drbdsetup status --verbose --statistics
# No currently configured DRBD found.
Then I do:
root@xxx:/etc/lvm# drbdadm up vm_disk
And then DRBD is present (/dev/drbdX device...
The license affects only DRBD Manage. The DRBD Linux kernel driver and the DRBD utils are GPLv2, and this can't be changed easily by Linbit.
If you removed only the storage driver, all the users who use DRBD 9 & LVM as storage, like it was necessary with DRBD 8.x, could still use Proxmox as a...
I explained in this post why I am using DRBD9 the "old way": DRBD9 directly on a disk and an LVM storage on top of it.
Maybe someone can point me in the right direction on how to activate the DRBD9 daemons at startup.
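My current idea for the startup part (an untested sketch; the unit name is arbitrary, and I am assuming pve-manager.service is what autostarts the VMs on Proxmox 4.x):

# /etc/systemd/system/drbd-up.service
[Unit]
Description=Bring up all configured DRBD resources
After=network-online.target
Before=pve-manager.service

[Service]
Type=oneshot
ExecStart=/sbin/drbdadm up all
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

After a "systemctl daemon-reload" and "systemctl enable drbd-up.service" the resources should then come up at boot, before the first VM is started.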