Search results

  1. iSCSI: Useless entries regarding "no route to host" in daemon.log

    Hi, it's the same problem as here: https://forum.proxmox.com/threads/multipath-iscsi-problems-with-8-1.137953/ and https://forum.proxmox.com/threads/multipath-iscsi-problems-with-8-1.137953/#post-623958 There is a ticket: https://bugzilla.proxmox.com/show_bug.cgi?id=5173
  2. Problems after upgrade to PVE 8.1.3

    I guess it's the same issue as here: https://forum.proxmox.com/threads/multipath-iscsi-problems-with-8-1.137953/
  3. Multipath iSCSI problems with 8.1

    I agree with you that PVE should try to reconnect to the second one, but it should not keep setting the node online and offline the whole time.
  4. Multipath iSCSI problems with 8.1

    To be clear: the Linux iSCSI initiator causes no problems and creates correct multipath sessions. It's the Proxmox tooling around it that keeps restarting all the time if a target is not reachable. THIS should not be the correct behavior in case of a path failure...
  5. Multipath iSCSI problems with 8.1

    So if a path to one of the targets is down when the node boots up, the node will never come up? I guess that's not the right idea.
  6. Multipath iSCSI problems with 8.1

    I have two subnets/physical interfaces for iSCSI communication. 192.168.255.0/24 is the internal HA/DRBD network for the redundant storage pair, so it's not reachable from the Proxmox hosts. Maybe it's an iscsid behavior on the storage side which binds to all available IP addresses, including...
  7. Multipath iSCSI problems with 8.1

    Furthermore, the problem is: iSCSI multipath and the volumes are up as they should be, but e.g. "pvesm status" does not work, with the same errors:
        iscsiadm: No portals found
        iscsiadm: No portals found
        iscsiadm: No portals found
        iscsiadm: default: 1 session requested, but 1 already present.
        iscsiadm...
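
    A quick way to cross-check the situation described above (sessions and volumes up, but the Proxmox storage layer still complaining) is to compare the initiator's view with Proxmox's view. A minimal sketch, assuming a standard open-iscsi/multipath setup:

        # list active iSCSI sessions as seen by the initiator
        iscsiadm -m session
        # show multipath maps and per-path states ("active ready" vs. "failed faulty")
        multipath -ll
        # then re-run the Proxmox storage check that was failing
        pvesm status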
  8. Multipath iSCSI problems with 8.1

    Hi, 8.1 included an iSCSI "improvement" that tries to log in to all portals delivered by sendtargets. Problem: if you use certain iSCSI servers (e.g. Open-E), they send you all locally configured IP addresses, even ones that are not in use, and there is no way to change this behavior. So...
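
    As an illustration of the behavior described above, this is roughly what a SendTargets discovery against such a server can look like; the portal addresses and IQN below are placeholders, not values from the thread:

        # discovery against the one reachable portal (example addresses)
        iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
        # 192.0.2.10:3260,1 iqn.2001-04.com.example:storage
        # 192.168.255.1:3260,1 iqn.2001-04.com.example:storage   <- internal address, unreachable
        # one possible manual workaround: delete the node record for the unreachable portal
        iscsiadm -m node -T iqn.2001-04.com.example:storage -p 192.168.255.1:3260 -o delete

    Note that a later SendTargets discovery will recreate the deleted record, so this is a stopgap rather than a fix.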
  9. [SOLVED] WARNING: CPU: 0 PID: ... [openvswitch]

    I can see the same on a newly installed node when trying to start an OVS bond at bootup.
        [ 7.152637] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
        [ 7.152641] softdog: soft_reboot_cmd=<not set> soft_active_on_boot=0
        [ 7.372713] openvswitch...
  10. VMs crashing/freezing on a Cluster during PBS backup operation

    Hi, I guess it's related to https://forum.proxmox.com/threads/freez-issue-with-latest-proxmox-6-3-4-and-amd-cpu.85348/ (and some other threads)
  11. Freez issue with latest Proxmox 6.3-4 and AMD CPU

    Hi, it seems we also have this issue. The crashes start at night when the backups begin...
  12. No reboot with 4.4 pve kernel

    I have just installed Proxmox 4.3 on a new Dell 630 with the same bnx2x issue. Any news about this? Is there another workaround besides moving to OVS?
  13. test packages in enterprise repo ?

    Yes, I know. I just wanted to say that the 4.4 kernel and the new GUI have just moved to the no-subscription repo; a few days ago they were still only in the test repo.
  14. test packages in enterprise repo ?

    # apt-get update
    Ign http://ftp.de.debian.org jessie InRelease
    Hit http://ftp.de.debian.org jessie Release.gpg
    Hit http://ftp.de.debian.org jessie Release
    Ign http://enterprise.proxmox.com jessie InRelease
    Hit http://enterprise.proxmox.com jessie Release.gpg
    Hit http://enterprise.proxmox.com...
  15. test packages in enterprise repo ?

    At least kernel 4.4 and the new GUI have just moved to pve-no-subscription. I have just installed and updated a new server; now kernel 4.4 and the new GUI are in use:
        # pveversion
        pve-manager/4.1-34/8887b0fd (running kernel: 4.4.6-1-pve)
  16. Disable node in cluster if iSCSI fails

    Hi, is there a way to disable (or just reboot) a node in an HA cluster when losing all paths to an iSCSI storage?
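
    The thread does not mention a built-in switch for this, but as a rough sketch of the idea: a small watchdog that reboots the node once the multipath map has no active path left. The alias "mpatha" and the reboot action are assumptions, not a tested recipe:

        #!/bin/bash
        # Hypothetical path watchdog: reboot this node if the multipath map
        # "mpatha" (placeholder alias) no longer reports any active, ready path.
        if ! multipath -ll mpatha | grep -q 'active ready'; then
            logger "iscsi-watchdog: no active paths left for mpatha, rebooting node"
            systemctl reboot
        fi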
  17. virtio-scsi over LVM problems

    Hi, it seems the problem was fixed in the most recent version: pve-manager/4.1-15/8cd55b52
  18. virtio-scsi over LVM problems

    So https://forum.proxmox.com/threads/4-0-scsi-vs-virtio-driver.24080/ is also addressed by the patch?! Thanks!
  19. 4.0 scsi vs virtio driver

    Any idea how to solve it?
  20. 4.0 scsi vs virtio driver

    Hi, currently I'm seeing the same problem in our lab (most recent Proxmox version: pve-manager/4.1-5/f910ef5c, running kernel 4.2.6-1-pve). When using scsi0 + virtio-scsi I can see the whole VG in the guest machine (and destroy it when writing to it). Moving to a virtio HDD device, it does not...
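
    For context, the two attachment types being compared map to different lines in the VM config. A hypothetical excerpt (storage name, VMID, and size are placeholders):

        # /etc/pve/qemu-server/100.conf (illustrative values; comments added here for readability)
        scsihw: virtio-scsi-pci
        # the virtio-scsi attachment showing the problem:
        scsi0: lvm-storage:vm-100-disk-1,size=32G
        # vs. the virtio block device the poster moved to:
        virtio0: lvm-storage:vm-100-disk-1,size=32G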
