Hi,
It's the same problem as here: https://forum.proxmox.com/threads/multipath-iscsi-problems-with-8-1.137953/
and https://forum.proxmox.com/threads/multipath-iscsi-problems-with-8-1.137953/#post-623958
There is a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=5173
To be clear: the Linux iSCSI initiator itself causes no problems and creates the multipath sessions correctly. It's the Proxmox tooling around it that keeps restarting things all the time whenever a target is not reachable. THIS should not be correct behavior when a single path fails...
I have two subnets/physical interfaces for iSCSI communication. 192.168.255.0/24 is the internal HA/DRBD network for the redundant storage pair, so it is not reachable from the Proxmox hosts.
Maybe it's an iscsid behavior on the storage side, which binds to all available IP addresses, including...
Furthermore, the problem is: the iSCSI multipath sessions and volumes are up as they should be.
But e.g. "pvesm status" does not work and fails with the same errors:
iscsiadm: No portals found
iscsiadm: No portals found
iscsiadm: No portals found
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm...
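For what it's worth, the state the initiator actually reached can be checked directly with the open-iscsi and multipath tools, independent of pvesm (illustrative commands only; target names and devices will differ on your setup):

# list the active iSCSI sessions
iscsiadm -m session

# list the node/portal records open-iscsi knows about
iscsiadm -m node

# show the multipath maps and path states
multipath -ll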
Hi,
in 8.1 there was a fix / "improvement" for iSCSI that tries to log in to all portals delivered by sendtargets.
Problem: if you use certain iSCSI servers (e.g. Open-E), they send you all locally configured IP addresses, even the ones not in use, and there is no way to change this behavior. So...
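To illustrate (addresses and IQN below are made up): a sendtargets discovery against such a server returns one record per locally configured portal, including the ones the PVE hosts can never reach:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10
192.168.1.10:3260,1 iqn.2024-01.com.example:storage.target0
192.168.2.10:3260,1 iqn.2024-01.com.example:storage.target0
192.168.255.1:3260,1 iqn.2024-01.com.example:storage.target0   <- internal/unreachable address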
I can see the same on a newly installed node when trying to start an OVS bond on bootup.
[ 7.152637] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
[ 7.152641] softdog: soft_reboot_cmd=<not set> soft_active_on_boot=0
[ 7.372713] openvswitch...
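For context, the bond in question is defined in /etc/network/interfaces via the OVS ifupdown integration; a minimal sketch (interface names eth0/eth1, bridge vmbr0 and the bond_mode are assumptions, not my actual config):

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds eth0 eth1
        ovs_options bond_mode=balance-slb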
Yes, I know. I just wanted to say that the 4.4 kernel and the new GUI have just moved to the no-subscription repo; a few days ago they were still only in the test repo.
At least kernel 4.4 and the new GUI have just moved to pve-no-subscription.
I have just installed and updated a new server; it now uses kernel 4.4 and the new GUI:
# pveversion
pve-manager/4.1-34/8887b0fd (running kernel: 4.4.6-1-pve)
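For reference, that corresponds to the pve-no-subscription repository entry (a sketch for PVE 4.x on Debian Jessie; adjust the suite to your release):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian jessie pve-no-subscription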
Hi,
currently I'm seeing the same problem in our lab (most recent Proxmox version: pve-manager/4.1-5/f910ef5c, running kernel 4.2.6-1-pve).
When using scsi0 + virtio-scsi I can see the whole VG in the guest machine (and can destroy it by writing to it). Moving to a virtio (virtio-blk) disk device it does not...
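For illustration, the relevant difference in the VM config (/etc/pve/qemu-server/<vmid>.conf) looks roughly like this; storage name, VMID and size are placeholders:

# disk attached as scsi0 via the virtio-scsi controller
scsihw: virtio-scsi-pci
scsi0: mystorage:vm-100-disk-1,size=32G

# the same disk attached as a virtio (virtio-blk) device instead
virtio0: mystorage:vm-100-disk-1,size=32G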