Search results

  1. cpzengel

    iSCSI Multipath Prio

    Please read above. Two networks. The SAN can't deactivate iSCSI on the LAN!
  2. cpzengel

    iSCSI Multipath Prio

    As you can see, the portal delivers all routes to the available paths, but no priorities.
  3. cpzengel

    iSCSI Multipath Prio

    To make it clear again: both LANs are separated by a correct netmask, but Proxmox's iSCSI initiator still prefers the slow path, and I can't change that on the SAN because of its rubbish GUI!
  4. cpzengel

    iSCSI Multipath Prio

    Update: the desired solution is to offline the unwanted drive: echo offline > /sys/block/sdc/device/state and echo 1 > /sys/block/sdc/device/delete. These commands offline and remove the unwanted iSCSI path. The aim is to make this behavior permanent or to set up an active/passive constellation...
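
    A minimal sketch of that workaround, assuming sdc is the unwanted path device (take the actual device name from multipath -ll first):

      # show the current paths and pick the one on the slow network
      multipath -ll
      # mark the path device offline so multipath stops using it
      echo offline > /sys/block/sdc/device/state
      # remove the SCSI device from the system entirely
      echo 1 > /sys/block/sdc/device/delete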
  5. cpzengel

    iSCSI Resize

    Hi, how can I manage resizing of iSCSI volumes? After expanding the LUN on the storage, the new size is not recognized by PVE. Restarting multipath-tools helps it recognize the new size, but resizing the iSCSI disk is still not possible. The only workaround was to detach and reattach the disk. Is there a better practice...
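
    A hedged sketch for picking up a grown LUN without detaching the disk; sdb and sdc stand in for the iSCSI path devices:

      # ask the SCSI layer to re-read the size on every path device
      echo 1 > /sys/block/sdb/device/rescan
      echo 1 > /sys/block/sdc/device/rescan
      # reload multipath so the map reflects the new size (the restart mentioned above)
      systemctl restart multipath-tools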
  6. cpzengel

    VMDK to iSCSI

    Is it possible to convert a VMDK to iSCSI via Move Disk? I'm getting errors: create full clone of drive virtio1 (SRVVW-253:249/SRVVW_3.vmdk) TASK ERROR: storage migration failed: error with cfs lock 'storage-RD253': can't allocate space in iscsi storage. Any workarounds if not? Cheers, Chriz
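
    If Move Disk cannot allocate space on the iSCSI storage, one manual workaround (a sketch, not verified on this setup) is to convert the VMDK to raw and write it straight onto an existing LUN; both paths below are placeholders:

      # convert the VMDK and write the raw image directly onto the LUN's multipath device
      qemu-img convert -f vmdk -O raw /path/to/SRVVW_3.vmdk /dev/mapper/mpatha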
  7. cpzengel

    iSCSI Multipath Prio

    Routing is fine; the problem is that you add the portal and not the targets or paths themselves. I added the portal with the SAN IP. If I scan the SAN IP, it delivers all paths. The problem is that the 192.x path is used exclusively. # pvesm iscsiscan -portal 172.16.1.249...
  8. cpzengel

    iSCSI Multipath Prio

    192.x is the 1 Gbit LAN, 172.x is the 10 Gbit SAN. But it takes the LAN even though I have chosen the SAN portal.
  9. cpzengel

    iSCSI Multipath Prio

    We have two networks and it prefers the slower one :(
  10. cpzengel

    iSCSI Multipath Prio

    Hi, my stupid ReadyDATA delivers multipath on iSCSI and prefers the 1 Gbit link. How can I bind my iSCSI / give priority to my 10 Gbit interface? Even when I set the portal to the 10 Gbit address, it uses the LAN interface. Changing it on the SAN is not an option; the ReadyDATA deserves to die! Cheers
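
    One possible approach (a sketch, not verified against the ReadyDATA) is to bind open-iscsi to the 10 Gbit NIC and stop the 1 Gbit portal from logging in automatically; the interface name and portal address below are placeholders:

      # create an iSCSI interface definition bound to the 10Gbit NIC
      iscsiadm -m iface -I san10g --op=new
      iscsiadm -m iface -I san10g --op=update -n iface.net_ifacename -v ens1f0
      # keep the 1Gbit portal from being used at login
      iscsiadm -m node -p 192.168.1.249 -o update -n node.startup -v manual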
  11. cpzengel

    Cant add Node to Cluster

    Running ssh-copy-id root@clusternode in advance brought the solution.
  12. cpzengel

    Cant add Node to Cluster

    What do you mean by the source side? The cluster node or the machine to be added? I already updated the cluster node, but have not rebooted yet. # pvecm add 192.168.50.254 --use_ssh unable to copy ssh ID: exit code 1
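
    Combined with the ssh-copy-id fix from the post above, the working sequence looks roughly like this, run on the node being added:

      # copy the root SSH key to the existing cluster node first
      ssh-copy-id root@192.168.50.254
      # then join the cluster over SSH
      pvecm add 192.168.50.254 --use_ssh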
  13. cpzengel

    Cant add Node to Cluster

    The cluster was created on pve-manager/5.1-46/ae8241d4 (running kernel: 4.13.4-1-pve). The node I want to add to the cluster runs pve-manager/5.1-49/1e427a54 (running kernel: 4.13.16-2-pve). # pvecm add 192.168.50.254 Please enter superuser (root) password for '192.168.50.254'...
  14. cpzengel

    ZFS Replication LXC Container fails

    I set aclmode to off, still the same behavior: rpool off, rpool/Backup off, rpool/ROOT off, rpool/ROOT/pve-1 off, rpool/Replica off, rpool/Replica/CONTAINER off, rpool/data...
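
    A small sketch to reproduce that recursive listing; it assumes the property being checked is acltype, the ZFS-on-Linux property that takes the value off shown above:

      # list the ACL property for every dataset under rpool
      zfs get -r acltype rpool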
  15. cpzengel

    Upgrade 4.2 > 4.4 > 5.1 fails in ZFS Utils?

    Thanks. I rolled back via ZFS and retried. The problem was that I used the v5 no-subscription repo for the first upgrade, or something like that. The second attempt worked fine.
  16. cpzengel

    Upgrade 4.2 > 4.4 > 5.1 fails in ZFS Utils?

    It's the second machine I am upgrading this way. The repo is no-subscription, but I had something similar also with: root@pve1:~# apt dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt --fix-broken install' to correct these...
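
    The recovery sequence suggested by that apt message is roughly:

      # let apt repair the broken or half-configured packages first
      apt --fix-broken install
      # then retry the distribution upgrade
      apt dist-upgrade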
  17. cpzengel

    NoVNC Problem 5.1-32

    It's crazy. I had the problem for a couple of days, and I rebooted the machine many times. In the meantime it is working again!
  18. cpzengel

    Again bad ZFS Performance

    So in the morning I had a 6 GB ARC, now 2 GB and horrible performance. It apparently happened when I stopped a VM with a ZVOL datastore. After a reboot the 6 GB came back. proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve) pve-manager: 5.1-41 (running version: 5.1-41/0b958203) pve-kernel-4.13.13-2-pve...
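
    A hedged sketch for checking the ARC and pinning its size so it cannot collapse like this; the 4 GiB / 6 GiB limits are example values and need an initramfs rebuild plus a reboot to apply:

      # inspect the current ARC size and its targets
      awk '/^(size|c|c_min|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats
      # pin the ARC between 4 GiB and 6 GiB (example values, in bytes)
      echo "options zfs zfs_arc_min=4294967296 zfs_arc_max=6442450944" > /etc/modprobe.d/zfs.conf
      update-initramfs -u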