As I see in my syslog, proxwww forks too many children and I don't know why (I didn't even log in to the web console!)
Oct 20 11:34:25 ve proxwww[4798]: Starting new child 4798
Oct 20 11:34:25 ve proxwww[4799]: Starting new child 4799
Oct 20 13:03:17 ve proxwww[9736]: Starting new child 9736
Oct 20...
IMHO the iPhone version is useless: this device is not popular in the sysadmin community. Android is more democratic in price, and it is Linux, so its possibilities are nearly unlimited
I hope Proxmox VE 2.0 will be my best Merry Christmas present :)
Proxmox is a great virtualization solution; it helps many people save money, time and nerves, so... thank you, guys!
P.S. I have really strong and potentially useful skills in Bash scripting. How can I help Proxmox VE to be...
Yes, of course. But, again, WHY do I HAVE TO use vmbr0 only? If an interface with this name is absent, mirroring and tunneling don't work??? I don't see any reason for hardcoding this interface name. It's terrible, and I hope this will be fixed in the upcoming releases...
Yes, my Proxmox host has 2 physical network interfaces and a lot of VLAN-based interfaces. I don't know which interface to use for clustering to achieve the best performance, but the interface name "vmbr0" tells me nothing about which VLAN it's attached to, so I want to replace it with vmbr801 or...
Not so good; I need to use vmbr<VLAN number>, for example vmbr777, for clustering, and I want the possibility to change the interface name used for clustering.
So my question is...
What the hell | Why is vmbr0 hardcoded directly in the Perl modules???
ve1:/etc/pve/master# fgrep -rH vmbr0...
In the Proxmox Perl code, the IP address for tunneling and clustering is determined as the address of the vmbr0 network interface. But if I want to use any interface other than vmbr0, I can't: my handmade changes to the source code will be rewritten when upgrading Proxmox, and there is no such configuration...
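Instead of hardcoding vmbr0, the address lookup could simply take the interface name as a parameter. A minimal Bash sketch (assuming the iproute2 `ip` tool; the function name is mine, this is not how Proxmox does it internally):

```shell
# get_iface_ip: print the first IPv4 address of a given interface
# (defaulting to vmbr0).  Hypothetical helper, shown only to illustrate
# a configurable lookup instead of a hardcoded interface name.
get_iface_ip() {
    iface=${1:-vmbr0}
    ip -4 -o addr show dev "$iface" \
        | awk '{ split($4, a, "/"); print a[1]; exit }'
}
```

With something like this, `get_iface_ip vmbr777` would return the clustering address from whatever bridge you actually use.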
I have to insert some DRBD-related hooks into the VM live migration process (just before migration starts and right after migration finishes), so... how can I do it? I would prefer to write my before-migration and after-migration event handlers as a Bash script.
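As far as I know, Proxmox VE of that generation exposes no pre/post live-migration hook points, so one workaround is to wrap the `qm migrate` call in a Bash script that runs the handlers around it. A minimal sketch (all function names are mine; `QM` is overridable only so the wrapper can be exercised without a real cluster):

```shell
#!/bin/sh
# Hypothetical wrapper around qm migrate -- the hooks live outside Proxmox.

pre_migrate() {
    # before-migration DRBD checks go here, e.g. verify that /proc/drbd
    # shows ds:UpToDate/UpToDate before moving the VM
    echo "pre-migration hook for VM $1"
}

post_migrate() {
    # after-migration cleanup goes here
    echo "post-migration hook for VM $1"
}

migrate_with_hooks() {
    vmid=$1 target=$2
    pre_migrate "$vmid" || return 1
    ${QM:-qm} migrate "$vmid" "$target" --online || return 1
    post_migrate "$vmid"
}
```

Invoking `migrate_with_hooks 101 ve2` would then run both handlers around the live migration, and abort it if the pre-hook fails.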
I did recover my LVM and DRBD, but again I don't know how. One recommendation is to disable "primary-on-both" temporarily and try to recover as if it were a Primary/Secondary split-brain, not Primary/Primary.
Thanks :)
OK, I did it.
Now on node1 (primary) I have this state:
ve1:~# cat /proc/drbd
version: 8.3.4 (api:88/proto:86-91)
GIT-hash: 70a645ae080411c87b4482a135847d69dc90a6a2 build by root@oahu, 2010-01-15 11:39:31
0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/UpToDate C r----
ns:0...
To activate the volume group ("vgchange -a y") I need to deactivate it first ("vgchange -a n"), but I can't do that without halting all virtual machines and maybe all Proxmox services too!
The Proxmox team tends to use DRBD in primary/primary mode as a shared storage solution (in the OVH project, for example, right?), but you don't know how to recover from split-brain after a temporary network outage. That confuses me slightly...
P.S. I've read through all of the DRBD mailing lists and...
I've followed all the instructions in the DRBD documentation, but Proxmox or something else (LVM?) dead-locks /dev/drbd0 and I can't "drbdadm secondary r0". So it's impossible to execute "drbdadm -- --discard-my-data connect r0" and recover from split-brain.
ve1:~# drbdadm secondary r0
0: State change failed: (-12)...
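For reference, the usual DRBD 8.3 recipe on the node whose data is to be discarded is: release the device from LVM, demote, then reconnect while discarding local data (see the DRBD 8.3 user's guide). Sketched as a function, with the commands overridable purely for dry-run testing; "replicated" and "r0" are the names from this setup:

```shell
# Split-brain recovery on the "victim" node (DRBD 8.3 command syntax).
# The vgchange step is exactly what dead-locks while a VM still holds
# an open LV on the VG, so the VMs have to be stopped or moved first.
recover_split_brain_victim() {
    ${VGCHANGE:-vgchange} -a n replicated || return 1    # release /dev/drbd0
    ${DRBDADM:-drbdadm} secondary r0      || return 1    # demotion now possible
    ${DRBDADM:-drbdadm} -- --discard-my-data connect r0  # resync from the peer
}
```

On the surviving node a plain `drbdadm connect r0` may be needed afterwards if it dropped the connection too.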
Who knows what capabilities oVirt has that Proxmox doesn't, and vice versa?
Right now I see only one real competitor to Proxmox VE: oVirt from Red Hat. But oVirt has very strange and badly outdated documentation, so I don't understand whether oVirt could be a successful replacement for Proxmox or not :)
Because I need to stop all virtual machines running on the target node, or live-migrate them, every time I make changes to the LVM manually.
You may ask: why do I need to make any changes manually? Because Proxmox can't resize virtual machine images on LVM, but lvresize does it!
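The manual step itself is a single lvextend call; the guest then has to grow its own filesystem once it sees the larger disk. A sketch (the LV name is a made-up example, and the command is overridable only for dry-run testing):

```shell
# Grow a VM disk LV by hand -- the part Proxmox of that era can't do itself.
# Example call: grow_vm_disk /dev/replicated/vm-101-disk-1 8G
grow_vm_disk() {
    lv=$1 extra=$2
    ${LVEXTEND:-lvextend} -L "+$extra" "$lv"
    # afterwards, inside the guest: repartition if needed and run
    # resize2fs (ext3/4) or the filesystem's own grow tool
}
```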
I have two nodes with /dev/drbd0 as a physical volume and the volume group "replicated" on it.
When I create or rename a logical volume in the VG "replicated" on one node, it appears on the other one, but only in the output of lvdisplay/lvscan/lvs.
/dev/replicated/ and /dev/mapper stay unchanged and live...
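Without clustered LVM, the second node reads the new metadata from the shared PV but does not recreate the device nodes by itself; re-scanning and re-activating on that node usually does it. A sketch (command names overridable only for testing):

```shell
# Refresh /dev/replicated/* and /dev/mapper/* on the peer node after
# LVs were created or renamed on the other one.
refresh_vg_nodes() {
    ${VGSCAN:-vgscan} --mknodes   || return 1   # rebuild missing device nodes
    ${VGCHANGE:-vgchange} -a y replicated       # activate any new LVs
}
```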
I am already using the 2.6.32 kernel with DRBD 8.3.4 and it is working unbelievably slowly :(
On a virtual machine (KVM fully virtualized guest):
[zimbra@mailint ~]$ time ( dd if=/dev/zero of=log/DELME bs=1M count=512; sync )
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 190.76...
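Assuming the elided tail of that dd line is the usual "s, <rate>" suffix, i.e. the copy took 190.76 seconds, the numbers work out like this:

```shell
# Back-of-envelope throughput of the dd run above:
# 536870912 bytes in 190.76 s, in decimal megabytes per second.
dd_rate_mb_s() {
    awk 'BEGIN { printf "%.2f", 536870912 / 190.76 / 1000000 }'
}
```

That is roughly 2.8 MB/s, which really is unbelievably slow for a local disk.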