Same problem here with iSCSI.
After connecting to one iSCSI target with "iscsiadm -m node --portal X.X.X.X:3260 -T "iqn.XXX-XX.com.equallogic:0-????????????????????????-XXXX" -l", all iSCSI targets work well.
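For anyone hitting the same thing, the usual open-iscsi sequence is a discovery followed by an explicit login per target (the portal and IQN below are placeholders, exactly as in the command above):

iscsiadm -m discovery -t sendtargets -p X.X.X.X:3260
iscsiadm -m node --portal X.X.X.X:3260 -T "iqn.XXX-XX.com.equallogic:0-????????????????????????-XXXX" -l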
Maybe this can help.
I have a machine where USB is working fine with this (pveversion -v):
pve-manager: 2.0-45 (pve-manager/2.0/8c846a7b)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-63
pve-kernel-2.6.32-10-pve: 2.6.32-63
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve2
clvm...
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
Thanks a lot eMHa, that was the clue.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=566740
After changing that: http://svn.berlios.de/wsvn/iscsitarget/trunk/patches/compat-2.6.31.patch?rev=280&sc=1
I have compiled the...
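Roughly, the build comes down to applying that compat patch to the iscsitarget source and rebuilding the module. A sketch only; the source path and patch level are assumptions, adjust them to your tree:

cd /usr/src/iscsitarget
patch -p0 < compat-2.6.31.patch
make && make install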
Ok, don't do this in a production environment.
If you want to use the Nvidia drivers I assume what you want is 3D... Compiz or something similar?
What you can do is install Debian Squeeze with the linux-image-2.6.32-trunk-amd64 kernel, then you can install Compiz and compile the Nvidia 3D...
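In package terms that amounts to something like the following (package names are from Squeeze; the Nvidia step depends on your card and driver version, so treat this as a sketch):

apt-get install linux-image-2.6.32-trunk-amd64 linux-headers-2.6.32-trunk-amd64
apt-get install compiz
# reboot into that kernel, then build the Nvidia driver against its headers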
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
I have installed open-iscsi and iscsitarget from Squeeze; now when I try to connect the error is "initiator reported error (15 - already exists)". I should compile open-iscsi, maybe another day.
I'll post the solution if I find it...
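For reference, error 15 from open-iscsi generally means a session to that target already exists, so it can be worth checking for and cleaning up old sessions first (standard iscsiadm options; the target and portal are placeholders):

iscsiadm -m session                                          # list active sessions
iscsiadm -m node -T <target-iqn> -p X.X.X.X:3260 -u          # log out of the existing session
iscsiadm -m node -T <target-iqn> -p X.X.X.X:3260 -o delete   # drop the stale node record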
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)
I'm having the same problem prochap had when using Proxmox as an iscsitarget.
When I run a discovery from a client everything seems to work fine, but when I try to log in I get:
Any ideas?
Really I don't know why it doesn't work with that option, but if it is enabled SCO cannot boot from the disk. I've made a script that boots the VM without boot=on so I can make it work.
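A minimal sketch of the kind of script that can do this (not necessarily the exact one; it assumes 'qm showcmd' is available and simply strips ",boot=on" from the generated command, and the VMID 169251 is just the example from the other post):

#!/bin/sh
# Start the VM with the command Proxmox would generate, minus ",boot=on"
CMD=$(qm showcmd 169251 | sed 's/,boot=on//g')
eval "$CMD"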
Thank you.
I've installed a SCO Unix 5.0.7 KVM guest. The problem is that if I enable extboot the VM cannot boot. What Proxmox does is:
/usr/bin/kvm -monitor unix:/var/run/qemu-server/169251.mon,server,nowait -vnc unix:/var/run/qemu-server/169251.vnc,password -pidfile /var/run/qemu-server/169251.pid...
Finally I found my problem too. The servers were connected to one switch; I have changed the switch and now it works fine, so I must have had some bad configuration on that switch.
Thank you for all your help and time.
After removing the IP settings from the bond, the VMs cannot reach the LAN and the server cannot see the other servers in the cluster. After some minutes I lost the LAN.
After adding the IP settings back, it works like before.
I think that I must learn more about bonding...
Changing to this:
auto bond0
iface bond0 inet manual
slaves eth0 eth1
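For comparison, the layout Proxmox normally uses keeps the bond itself address-less and puts the IP on a bridge on top of it, roughly like this (the address values are copied from the config in the other post purely as an example):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1

auto vmbr0
iface vmbr0 inet static
        address 158.42.169.186
        netmask 255.255.254.0
        gateway 158.42.168.250
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0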
I'm having the same problem. Also, the VMs can reach my gateway but cannot reach some machines on the same LAN.
If I don't use the bond, everything works perfectly.
In /etc/modprobe.d/bonding I have...
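(For illustration only, since the actual contents are cut off above: a typical /etc/modprobe.d/bonding for this kind of setup looks something like the following, where the mode and miimon values are examples rather than the settings actually used.)

alias bond0 bonding
options bonding mode=balance-rr miimon=100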
I'm having the same problem.
This is my /etc/network/interfaces
auto lo
iface lo inet loopback
auto bond0
iface bond0 inet static
address 158.42.169.186
netmask 255.255.254.0
gateway 158.42.168.250
network 158.42.168.0
broadcast...