I'm currently installing three PVE hosts in cluster mode with an iSCSI EMC array. Everything is redundant: 2 ToR switches for the servers, 2 switches for the iSCSI traffic. We've chosen this solution to migrate away from ESX (too expensive).
I’ve read a few articles / posts related to PVE...
Hi guys! I have a cluster with rack servers and blade servers. At the moment I'm trying to connect a Dell VNX7600 datastore over iSCSI. The rack servers connected without problems, but after connecting the blade and rebooting I got a boot failure (screenshot attached).
I've tried deleting the WWID and routes to the datastore but...
I am trying to get my Proxmox VE 7.3-4 setup working with iSCSI and CHAP. (Without CHAP it was easy.) I have multiple iSCSI shares with different CHAP username/password combos. I see whispers that this should be possible, but nothing about multiple username/password combos. Anyone have any information?
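In case it helps: with plain open-iscsi, CHAP credentials live on each node record, so different targets can carry different username/password pairs. A minimal sketch (the IQN, portal, and credentials are placeholders for your own):

```shell
# Set per-target CHAP credentials on the node record, then log in.
# Repeat with a different IQN/username/password for each share.
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.10.10:3260 \
  -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.10.10:3260 \
  -o update -n node.session.auth.username -v chapuser1
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.10.10:3260 \
  -o update -n node.session.auth.password -v chapsecret1
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.10.10:3260 --login
```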
Hello, I have three servers on which I am going to install Proxmox version 7.3. The storage is a Dell PowerVault ME5 (ME5084) and I want to configure iSCSI multipath. Although I have read the wiki, there are things that are not clear to me. Could you show me some example files for a Dell...
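Not an official Dell example, but the general pattern on PVE is to whitelist the LUN's WWID and let multipathd group the paths into one device. A sketch with placeholder device names and WWID (substitute what your system actually reports):

```shell
# 1. Get the WWID of one path to the ME5 LUN (sdb is a placeholder):
/lib/udev/scsi_id -g -u -d /dev/sdb

# 2. Whitelist that WWID (placeholder value shown):
multipath -a 3600c0ff000d0000000000000example

# 3. Reload and verify all paths are grouped under one mpath device:
systemctl restart multipathd
multipath -ll
```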
When it comes to storage, we have been using ZFS over iSCSI in our clusters for years.
Now for a couple of new projects, we require S3 compatible storage and I am unsure about the best way to handle this situation. I am tempted to use MinIO, but I've read mixed reviews about it and Ceph seems...
We have a cluster of 7 nodes. VMs and LXCs are stored on external storage, on a shared LVM, via iSCSI.
We recently had to move from a 2 x 1 Gbps iSCSI multipath connection to a single 10 Gbps fiber Ethernet link (single-link iSCSI, no multipathing).
The iSCSI storage has 4 ethernet ports...
I am setting up a new PVE node as follows:
* 2 x 480 GB SSD for Proxmox VE (ZFS RAID1)
* 2 x 2 TB for local backups and other storage (RAID1)
CPU will be Intel Xeon E-2334 @ 3.40GHz, 64GB of RAM.
The load would be about 12 VMs, mostly Debian 11 with light loads, but also a couple of Windows...
Hey everyone, a common question in the forum and to us is which settings are best for storage performance. We took a comprehensive look at performance on PVE 7.2 (kernel=5.15.53-1-pve) with aio=native, aio=io_uring, and iothreads over several weeks of benchmarking on an AMD EPYC system with 100G...
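For reference, those knobs are set per disk in the VM config. A sketch of how they can be applied (VMID 100 and the storage/volume names are placeholders, not from the benchmark setup):

```shell
# Use one iothread per disk with the single-queue SCSI controller,
# and select the async I/O engine per disk (io_uring or native):
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,aio=io_uring
```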
My iSCSI target is on a Synology NAS.
I have 4 LUNs ready in the NAS.
All 4 LUNs were available in Proxmox.
Today only three are available in Proxmox, and one VM is no longer working because its disk is on the lost LUN.
How can I retrieve the lost LUN?
Thanks a lot
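A hedged starting point: ask open-iscsi to re-scan the existing session, and if the session itself is gone, re-discover and log in again. The portal IP and IQN below are placeholders for the Synology's values:

```shell
# Re-scan current sessions for LUNs that appeared or disappeared:
iscsiadm -m session --rescan

# If the session is gone, re-discover the portal and log in again:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node -T iqn.2000-01.com.synology:nas.target-1 -p 192.168.1.50:3260 --login

# Then check whether the kernel sees the block device again:
lsblk
```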
One of my major clients is considering PVE but I've run into the same snag as earlier this year. The organization has several HPE Nimble SANs and they aren't going away as far as I can tell. HPE Nimble is block storage only, so iSCSI is basically the one and only solution...
I have added a remote iSCSI share from a StarWind VSAN. Setup was flawless and I can see the drive and the space assigned. I also kind of already know the answer (I've been testing it), but would like an official confirmation from anyone in case I am doing it wrong.
Is it possible to use an iSCSI...
Hi, I'm new to Proxmox. I have 4 PVE 6.2-4 nodes. I recently added a pool from my TrueNAS via iSCSI and created an LVM inside it. When I go to create a machine or move a disk, I get the following; how can I solve it, please? (See the two pictures attached.)
WARNING: Not using device /dev/sdf for PV...
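That warning is usually LVM seeing the same PV through more than one path and preferring one device over the raw /dev/sdf path, which is informational rather than fatal. A sketch of how to check, assuming multipath is involved:

```shell
# See which device LVM actually chose for each PV:
pvs -o pv_name,vg_name

# If multipath is in use, LVM should be using /dev/mapper/mpathX,
# not a raw /dev/sdX path. Confirm the paths are grouped:
multipath -ll
```

A common follow-up is restricting LVM scanning to the mapper devices via `global_filter` in /etc/lvm/lvm.conf, but take care not to filter out the boot disk.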
We've been struggling to move away from a poorly performing Ceph cluster at a customer site for a little while now. Our NAS supports iSCSI and NFS, but not Ceph or ZFS over iSCSI. Is there truly no solution that includes HA, live migration, and PBS live backups using an iSCSI...
I started up my Proxmox server today and found that I cannot boot any of my VMs. None of the iSCSI disks seem to be working. Here is the output:
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update...
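The empty IQN and portal in that path (`/etc/iscsi/nodes//,3260,-1/`) suggest a stale or corrupt node record rather than a storage-side failure. A hedged recovery sketch (the portal IP is a placeholder; check what `ls` actually shows before deleting anything):

```shell
# Inspect the on-disk node records; an entry with an empty IQN
# (matching the error message) is bogus:
ls -R /etc/iscsi/nodes/
iscsiadm -m node

# Remove only the bogus record directory that ls revealed, e.g.:
# rm -r '/etc/iscsi/nodes/,3260,-1'   # adjust to the exact path shown

# Then re-discover the portal and log back in:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node --login
```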
Is there a way to change the user from root to admin when connecting ZFS over iSCSI?
command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.10.1.100_id_rsa firstname.lastname@example.org zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp' failed: exit code 255 (500)...
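Exit code 255 is SSH itself failing rather than the `zfs` command, so a first diagnostic step is to run the exact same key and user non-interactively by hand. This is only a sketch (IP and user are placeholders); note the non-root user would also need rights to run `zfs` on the target (e.g. via sudo or ZFS permission delegation), and the plugin may still assume root:

```shell
# Reproduce what the PVE plugin runs, with your configured user:
ssh -o BatchMode=yes -i /etc/pve/priv/zfs/10.10.1.100_id_rsa \
  admin@10.10.1.100 zfs list -o name -t volume -H
echo "exit code: $?"
```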
I am currently setting up iSCSI storage with direct LUNs.
However, the VM does not detect any partitions (the LUNs already have data on them).
I experimented and found that if I create an MBR table (rather than GPT) with a single partition, the VM detects the partition, but with an invalid size.
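One possible explanation, offered only as a guess: GPT keeps a backup header at the very last sector of the disk, so if the guest sees a different disk size than the host, GPT validation fails entirely, while MBR (which lives only in sector 0) half-works with wrong sizes. A sketch to compare sizes and verify the table (device names are placeholders):

```shell
# On the PVE host, note the exact size of the LUN:
blockdev --getsize64 /dev/sdb

# Inside the VM, check the size of the attached disk the same way
# and compare; any mismatch would explain the GPT failure.
# Also verify the GPT on the host; sgdisk reports a misplaced
# backup header explicitly:
sgdisk -v /dev/sdb
```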
First, our hardware:
Node 1: "Simpsons", with one iSCSI connection for management and one for the server network (room 1)
-> Connected via SFP to SWITCH-1 (room 1)
Node 2: "Flanders", with one iSCSI connection for management and one for the server network (room 2)
-> Connected via SFP to SWITCH-2 (room 2)
1 main Synology NAS (room 1)
Since no one was answering in the other thread and this issue was basically being ignored, I have to open a new one.
So: with the new kernel pve-5.15.30 my iSCSI connections break and can't be revived. Only with the "old" 5.13.19-6 kernel do my iSCSI connections work.
I have currently...
I'm in the process of setting up a HP MSA2040 SAN to work with Proxmox. As I just started using a SAN, I have a little over zero knowledge of these things, so there most probably will be errors in my setup (or terms I don't use correctly).
Current status is:
- MSA2040 is connected via...
Since I set up some iSCSI storage in Proxmox version 6, I have a lot of errors in my server logs.
In dmesg, /var/log/messages, and /var/log/kern.log, I see a lot of:
[2572603.021727] sd 10:0:0:2: [sdn] Unit Not Ready
[2572603.038628] sd 10:0:0:2: [sdn] Sense Key : Illegal Request...
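A hedged way to narrow this down is to map the SCSI address 10:0:0:2 / kernel name sdn back to a target and LUN and probe it directly; such noise often comes from an unmapped management/controller LUN. Device names below are taken from the log:

```shell
# Map SCSI addresses to devices and targets (lsscsi is in the
# lsscsi package; by-path also shows the iSCSI target per device):
lsscsi
ls -l /dev/disk/by-path/ | grep sdn

# Ask the device directly whether it is ready (sg3-utils package);
# this issues the same Test Unit Ready the kernel is logging about:
sg_turs -v /dev/sdn
```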