I am trying to use a FreeNAS server for shared storage in a 3-node Proxmox cluster to enable HA and live migration. The iSCSI target is set up on a ZFS pool of 4 identical enterprise-grade SSDs and is reporting "healthy" in FreeNAS. I followed the helpful instructions here...
I have a serious problem. We installed a SAN for 3 nodes: a FUJITSU 2540 with one storage bay. I created two LVM partitions on the bay (SAN_VM for VMs and SAN_BACKUP for backups); all VM disks are stored on SAN_VM.
Today, after a power failure, I restarted the components of our SAN...
I have experienced an issue with the iSCSI target/COMSTAR on OmniOS, with Proxmox as the initiator (ZFS over iSCSI, zvols).
The STMF service seems to crash after heavy I/O load; the iSCSI TCP port closes as well and the target goes offline. A reboot helps.
When I try to restart stmf manually...
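For reference, on OmniOS the target stack is managed through SMF; a typical sequence to inspect and manually restart it (assuming the default service instance names) looks like this:

```shell
# Show why the service is down (state, dependencies, log file)
svcs -xv stmf

# Clear a maintenance state, then restart STMF and the iSCSI target service
svcadm clear svc:/system/stmf:default
svcadm restart svc:/system/stmf:default
svcadm restart svc:/network/iscsi/target:default

# Verify the target framework and targets are back online
stmfadm list-state
itadm list-target -v
```

If STMF keeps dropping into maintenance under load, the SMF log file named by `svcs -xv` is usually the first place to look.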
I have a number of proxmox hosts, and some connected to a Dell/EMC SAN.
On most of the connected hosts I have added an iSCSI connection to the SAN and then either used the LUNs directly or added an LVM volume on top. However, I'm trying to set up a new host now, and I can add the iSCSI...
I have a mounted ZFS over iSCSI storage device using the LIO plugin.
It is successfully mounted to my nodes
I took a look into /Storage/ISCSIPlugin.pm to see how the storage is being mounted. It looks like it is using iscsiadm.
But when I try to see the devices:
# /usr/bin/iscsiadm --mode...
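For anyone following along, here is a sketch of how open-iscsi sessions and their block devices can be inspected (device names will differ per host):

```shell
# List active iSCSI sessions (target name and portal per session)
iscsiadm --mode session

# Print full session details, including the attached SCSI devices
iscsiadm --mode session -P 3

# Cross-check the block devices the kernel actually created
lsblk --scsi
```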
I was testing my ZFS over iSCSI storage with different settings (compression, encryption, etc.) on different datasets on my ZFS pool, and noticed that the disk numbering convention numbers the disks per storage appliance.
The first disk you create will be called vm-vmid-disk-0;
attaching another disk...
I've just configured my SAN to run as a PVE node/storage appliance with ZFS over iSCSI as a LIO target.
NFS and iSCSI over RDMA is working well with the exception of adding EFI disks to a VM.
Copying EFI vars image failed: command '/usr/bin/qemu-img convert -n -f raw -O raw...
I need to install targetcli-fb in an LXC container for an iSCSI service.
By installing targetcli-fb on the host, I got all the kernel modules in the LXC container, and targetcli works without errors.
My first step was to create a file-based backstore, but this ended with an error. Unfortunately, targetcli writes...
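For context, a fileio backstore is normally created like this (the name, path, and size below are just examples); inside an unprivileged LXC container this step typically fails because targetcli needs write access to the kernel's configfs:

```shell
# Create a 1 GiB file-backed backstore (example name and path)
targetcli /backstores/fileio create name=disk0 file_or_dev=/srv/iscsi/disk0.img size=1G

# targetcli drives the kernel target through configfs; this mount must
# be visible and writable inside the container for the command to work
ls /sys/kernel/config/target
```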
I'm new to Proxmox and am still confused about where to store each VM's application data.
I've read a lot of threads here, but most of them focus on storing the actual VM files, while I'm interested in where to store each application's specific data.
Example of the data I want to store...
Basically, the title says it all. We rebooted our NAS, but now the VMs that were stored on it are not able to restart. This is the error we get:
iscsiadm: No portals found
command '/usr/bin/iscsiadm --mode discovery --type sendtargets --portal 1XX.1XX.9.1XX' failed: exit code 21
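Exit code 21 from iscsiadm means "no objects found", i.e. the discovery returned nothing. A rough troubleshooting sketch (the portal address below is a placeholder, not the real one):

```shell
# Example portal address; substitute the NAS's real IP
PORTAL=192.0.2.10

# First confirm the iSCSI port is reachable at all
nc -zv "$PORTAL" 3260

# Re-run sendtargets discovery against the portal
iscsiadm --mode discovery --type sendtargets --portal "$PORTAL"

# Log back in to the discovered targets
iscsiadm --mode node --login
```

If the NAS came back with a different IP, or its target service did not start after the reboot, discovery will keep failing until that is fixed on the NAS side.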
I have a 4-node cluster sharing an iSCSI LUN with LVM on top of it.
I've created a template that I use to clone VMs and add a cloud-init drive.
Everything works fine until I stop the VM (which has its disk and cloud-init drive on the shared LVM); it then complains that it cannot start:
I have the following problem.
If I delete a VM and later create a new VM with the same ID,
then I get the following error:
device-mapper: create ioctl on vmdata-vm--105--disk--0LVM-Kv4PpUAk4B73jkQAHVozUVypVNy97MUKP4Eb0l3BbxvgAIXeXV21HYLEl4h3CEHs failed: Device or resource busy
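This error usually means a stale device-mapper entry for the old VM 105 disk is still present on one of the nodes. A cleanup sketch (the `vmdata` VG name is taken from the error message above):

```shell
# Find leftover mappings for the deleted VM on each node
dmsetup ls | grep 'vm--105'

# Remove the stale mapping; a new LV with the same name can then be created
dmsetup remove vmdata-vm--105--disk--0
```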
Hello, my iSCSI target is a zvol on a traditional RAID and is NOT using ZFS on FreeNAS. I set up my first iSCSI target as plain "iSCSI" storage. Should I consider using "ZFS over iSCSI" as a storage setup in this particular case?
I have MPIO set up on FreeNAS, but have not yet set up MPIO on...
Are there any plans to implement multipath support for the ZFS over iSCSI plugin, considering the ZFS on Linux project is gaining more and more traction?
As far as I'm aware there is no official multipath support yet ( according to https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI...
I have shared storage (iSCSI) for a Proxmox cluster (5 nodes).
I have added the iSCSI target to Proxmox. But now I want to use the iSCSI storage with thin provisioning, and I noticed the "LVM thin" option.
Can I configure this? Or can I only configure LVM on top of the iSCSI storage, like this?
Thanks for your help.
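For what it's worth, LVM-thin is not cluster-aware, so it cannot safely sit on an iSCSI LUN shared between nodes; plain (thick) LVM is the supported option for shared iSCSI storage. A minimal sketch of that setup (`/dev/sdX` and the VG name are examples):

```shell
# /dev/sdX is the iSCSI LUN as it appears on the node (example device name)
pvcreate /dev/sdX
vgcreate san_vg /dev/sdX

# Then add it in the GUI: Datacenter -> Storage -> Add -> LVM,
# pick the san_vg volume group and tick "Shared"
```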
Hello! iSCSI. Everything is built according to this scheme:
When adding a new target in Proxmox (5.4), the storage becomes unavailable, even though the second interface is active.
Failover to the new paths does not happen!
md3600i (3690bXXXXXXXXXXXXXXX72) dm-5 DELL, MD36xxi
size = 2.0T features = '4...
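For Dell MD36xxi arrays, failover generally depends on the RDAC hardware handler being configured in multipath. A commonly cited `/etc/multipath.conf` device stanza looks like the following (verify the exact values against Dell's documentation for your firmware):

```conf
devices {
    device {
        vendor                "DELL"
        product               "MD36xxi"
        hardware_handler      "1 rdac"
        path_checker          rdac
        prio                  rdac
        path_grouping_policy  group_by_prio
        failback              immediate
        no_path_retry         30
    }
}
```

After editing, reload the maps with `multipath -r` and re-check `multipath -ll`.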
Virtual Environment 5.4-3
I have an iSCSI target on Debian Linux, created with tgt, available via 2 interfaces.
And I have two PVE nodes in a cluster. I added the iSCSI storage via the GUI without problems, but I need to add the target with 2 portals.
As I understand it, I need to add the target with each portal, and configure...
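On the initiator side, one common approach (a sketch; the addresses are examples) is to discover and log in through both portals and let multipath merge the resulting paths, since a Proxmox iSCSI storage entry only takes a single portal:

```shell
# Discover the target through each portal (example addresses)
for portal in 10.0.0.1 10.0.1.1; do
    iscsiadm -m discovery -t sendtargets -p "$portal":3260
done

# Log in to every discovered node record
iscsiadm -m node --login

# Each portal yields its own /dev/sdX; multipath combines them into one device
multipath -ll
```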