Search results

  1. Proxmox with iSCSI Equallogic SAN

    One more question about interface optimization. Did you disable LRO/GRO only on the physical interfaces, or on the Linux bridges/VLANs too? I don't know whether those options are inherited or not. My network topology on the Proxmox nodes for iSCSI is 2x10Gbit <---> bond0 <---> bond0.2 (so iSCSI is in VLAN 2, connected through...
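    For reference, turning LRO/GRO off on the bond's physical members would look roughly like this (a sketch; the slave NIC names eth0/eth1 are placeholders):
      # inspect the current offload settings on one physical NIC
      ethtool -k eth0 | grep -E 'large-receive-offload|generic-receive-offload'
      # disable LRO and GRO on each physical slave of bond0
      ethtool -K eth0 lro off gro off
      ethtool -K eth1 lro off gro off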
  2. Proxmox-ve and Dell EqualLogic storage (iSCSI)

    LACP is only on the server-to-switch side. On the switch-to-storage side they are connected with 4x 10Gbit cables (only two ports are active); there is no LACP on the storage side. I don't think LACP is supported on EQL devices. From my understanding there is one group IP (something like a virtual IP)...
  3. Proxmox with iSCSI Equallogic SAN

    But only on the server side, I assume. So you are using the group IP for both paths? I have only one interface (an LACP bond of 2x10Gbit) on my servers... But I really don't understand how failover works on EQL with the group IP. If I disable one port on the EQL, what can happen to the services? Is there a down...
  4. Proxmox with iSCSI Equallogic SAN

    Hi, how did you manage to connect Proxmox and EQL? Are you using multipath? Many thanks for the help :)
  5. Proxmox-ve and Dell EqualLogic storage (iSCSI)

    Hi, do you have any idea how Proxmox VE and Dell EqualLogic storage (PS4210, iSCSI) can be connected? Currently we are using the group IP for connecting to the iSCSI targets... without any additional configuration like multipath etc... But I don't think this is a proper setup, because of performance and...
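    For context, connecting through the group IP with open-iscsi looks roughly like this (the group IP 10.0.2.10 and the target IQN are placeholders, not from the thread):
      # discover the targets advertised behind the EqualLogic group IP
      iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260
      # log in to one of the discovered targets
      iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -p 10.0.2.10:3260 --login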
  6. Single-file restore and LVM

    Hi, today I tested single-file restore with LVM in a VM, but it's not supported. proxmox-file-restore failed: Error: mounting 'drive-scsi0.img.fidx/part/2' failed: all mounts failed or no supported file system (500). Is there any roadmap or list of supported filesystems for single-file restore? Thanks :)
  7. Failed to activate new LV on vm creation/cloning

    From testing and investigation: multipath is not involved in this problem (tested on multiple clusters with and without multipath); storage HW or data transfer protocol is not involved in this problem (tested on iSCSI and FC on multiple storage types); LVM is not involved in this problem (LVM...
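    A quick way to see where a problematic LV is (still) active is to query its activation state on every node (the VG name and the node names are placeholders):
      # on each node: show which LVs of the shared VG are active locally
      lvs -o lv_name,vg_name,lv_active my_shared_vg
      # or loop over the cluster nodes from one host
      for n in pve1 pve2 pve3; do echo "== $n =="; ssh "$n" lvs -o lv_name,lv_active my_shared_vg; done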
  8. [SOLVED] I broke LVM on iSCSI in a cluster. Can it be fixed?

    Check if you have access to the volume from every node in the cluster (ACLs on the storage). Check the disk with fdisk -l on every node. Check LVM on every node with pvs and vgs. Check if you have selected all nodes in the iSCSI/LVM storage settings in the Proxmox web UI. :)
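    The same checks as a short command sequence, to run on every node:
      fdisk -l    # is the iSCSI disk visible on this node?
      pvs         # does LVM see the physical volume?
      vgs         # does LVM see the volume group?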
  9. [SOLVED] I broke LVM on iSCSI in a cluster. Can it be fixed?

    Then just create a new one :) pvcreate /dev/sdc, then vgcreate new_vol_group /dev/sdc, and then use "Existing volume groups" and select the created VG. But yes, this looks like a bug in Proxmox 6.0. In 5.4 you can see the volume and you can create a new VG from the web GUI. But you can try only "pvcreate /dev/sdc"...
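    Spelled out, that command-line route is (device /dev/sdc and the VG name are taken from the post; double-check the device first, pvcreate is destructive):
      # initialise the iSCSI disk as an LVM physical volume
      pvcreate /dev/sdc
      # create a new volume group on top of it
      vgcreate new_vol_group /dev/sdc
      # verify
      pvs
      vgs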
  10. Failed to activate new LV on vm creation/cloning

    Same problem in our environment: one cluster on version 5.4.x (6 nodes, iSCSI/FC with multipath) and a second cluster on version 6.0.x (2 nodes, iSCSI). Looks like a bug in VM cleanup; the LV is still mounted on some nodes.
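    If a leftover LV is still active on a node, it can usually be deactivated there by hand (a sketch; the VG/LV path is a placeholder):
      # check which LVs are still active on this node
      lvs -o lv_name,vg_name,lv_active
      # deactivate the stale LV on the node that still holds it
      lvchange -an /dev/my_shared_vg/vm-100-disk-0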
  11. iSCSI messages and target warning

    1. Is there any solution for syslog flooding with alua messages (same messages with FreeNAS iSCSI or a Dell PS4210)? Same problem in this topic: https://forum.proxmox.com/threads/iscsi-message.44467/ Apr 08 12:00:01 pve002 kernel: sd 11:0:0:0: alua: supports implicit TPGS Apr 08 12:00:01 pve002...
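    One possible workaround, not mentioned in the thread, is to drop the repeating kernel line at the syslog level; a sketch assuming rsyslog is in use (the file name is arbitrary, the match string is the one from the log above):
      # /etc/rsyslog.d/10-drop-alua.conf
      :msg, contains, "alua: supports implicit TPGS" stop
      # then restart rsyslog: systemctl restart rsyslog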
  12. Fiber Channel storage problems

    Hi, we have a strange problem with Proxmox VE 5.3.8 + Dell SCv2020 over FC. The HDDs in the storage have a 512n sector size, but fdisk reports a physical sector size of 4096 and the I/O size is also very high. On top of that we use multipathd. Disk /dev/sdd: 3 TiB, 3298534883328 bytes, 6442450944 sectors Units...
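    To cross-check what the kernel itself reports for that disk and its multipath map (device /dev/sdd taken from the post):
      # logical vs physical sector size as exposed by the kernel
      cat /sys/block/sdd/queue/logical_block_size
      cat /sys/block/sdd/queue/physical_block_size
      # the same values via blockdev
      blockdev --getss --getpbsz /dev/sdd
      # and what the multipath device reports
      multipath -ll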
  13. [SOLVED] Cluster over LACP bond

    Fixed: a bad IP in the corosync config and the hosts file. My mistake :)
  14. [SOLVED] Cluster over LACP bond

    I tried disabling IGMP on the switch and sending the multicast packets as broadcast, but no success with omping; there is no problem with iperf and the multicast route, though :/
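    For reference, the usual multicast check is to run omping on all cluster nodes at the same time (node names are placeholders):
      # start this simultaneously on every node
      omping -c 600 -i 1 -q pve1 pve2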
  15. [SOLVED] Cluster over LACP bond

    Hi, we have 2 Dell M630 nodes; each node has a dual-port QLogic 57810-K connected to a Dell MXL stack. These two ports are configured as bond0 (an LACP bond). On top of the bond, with VLAN 20, a vmbr interface is configured for corosync/cluster traffic, but it looks like multicast traffic is not passing. - IGMP...
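    A rough /etc/network/interfaces sketch of the described layout (VLAN ID 20 is from the post; the NIC names, bridge name and address are illustrative):
      auto bond0
      iface bond0 inet manual
          slaves eno1 eno2
          bond_miimon 100
          bond_mode 802.3ad

      auto vmbr20
      iface vmbr20 inet static
          address 192.168.20.11
          netmask 255.255.255.0
          bridge_ports bond0.20
          bridge_stp off
          bridge_fd 0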