Hi, I'm new to Proxmox. I have 4 PVE 6.2-4 nodes. I recently added a pool from my TrueNAS via iSCSI and created an LVM on top of it. When I go to create a VM or move a disk, I get the error below; how can I solve it, please? Note: see the two attached pictures.
WARNING: Not using device /dev/sdf for PV...
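This LVM warning usually means the same physical volume is visible through more than one device node, for example both the raw /dev/sdX paths and a multipath device on top of them. One common fix, assuming the LUN is (or should be) managed by multipath, is a global_filter in /etc/lvm/lvm.conf so LVM only scans the multipath devices; the patterns below are illustrative and must be adapted to your device names:

```
# /etc/lvm/lvm.conf (illustrative; adjust the patterns to your devices)
devices {
    # Accept multipath devices, reject the underlying raw SCSI paths
    global_filter = [ "a|/dev/mapper/mpath.*|", "r|/dev/sd[b-z].*|" ]
}
```

After editing, `pvs` should list each PV exactly once and the warning should go away.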
We've been struggling to move away from a poorly performing Ceph cluster at a customer site for a while now. Our NAS supports iSCSI and NFS, but not Ceph or ZFS over iSCSI. Is there truly no solution that includes HA, live migration, and PBS live backups using an iSCSI...
I started up my Proxmox server today and found that I cannot boot any of my VMs. None of the iSCSI disks seem to be working. Here is the output:
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update...
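The empty target name in that path (`nodes//,3260,-1`) suggests the node records under /etc/iscsi/nodes are missing or corrupted. One way to rebuild them, assuming the storage is reachable, is to re-run discovery and log in again; 192.0.2.10 below is a placeholder for your portal IP:

```shell
# Re-create node records by re-discovering the portal (192.0.2.10 is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
# Log in to all discovered targets
iscsiadm -m node --login
# Verify that sessions are active again
iscsiadm -m session
```

If the records exist but are damaged, moving /etc/iscsi/nodes aside before the discovery gives open-iscsi a clean slate.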
Is there a way to change the user from root to admin when connecting ZFS over iSCSI?
command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.10.1.100_id_rsa email@example.com zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp' failed: exit code 255 (500)...
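Exit code 255 is SSH itself failing, so the plugin never even reaches `zfs list`. A quick way to debug is to run the same command by hand as root on the PVE node; the plugin expects a passwordless key at exactly that path, and as far as I know older plugin versions assume the root user on the target (a non-root user would additionally need rights to run the zfs commands there). The user and IP below are illustrative:

```shell
# Run the exact command the plugin uses (adjust user/IP to your setup)
ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.10.1.100_id_rsa root@10.10.1.100 \
    zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp
```

Whatever error this prints interactively (host key prompt, permission denied, command not found) is what the 500 error is hiding.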
I am currently setting up iSCSI storage with direct LUNs.
However, the VM does not detect any partitions (the LUNs already have data on them).
I experimented and found that if I create an MBR partition table (rather than GPT) with a single partition, the VM detects the partition, but with an invalid size.
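One common cause of exactly this pattern: if the guest sees a different disk size than the real LUN, the GPT backup header (which lives at the end of the disk) is no longer where the guest expects it, so GPT detection fails while MBR, which has no backup copy, still half-works. It is worth comparing what the host reports for the LUN against what the guest sees; /dev/sdb below is a placeholder for the LUN's device node:

```shell
# Size in bytes as the host sees the LUN (placeholder device /dev/sdb)
blockdev --getsize64 /dev/sdb
# Partition table as the host sees it; a size mismatch with the guest
# will also make parted complain about the GPT backup header location
parted /dev/sdb unit B print
```

Running the same checks inside the VM on its disk should then show whether the sizes disagree.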
First, our hardware:
Node 1: Simpsons, with one iSCSI connection for MGMT and one for the servers (room 1)
-> SFP connected to SWITCH-1 (room 1)
Node 2: Flanders, with one iSCSI connection for MGMT and one for the servers (room 2)
-> SFP connected to SWITCH-2 (room 2)
1 MAIN Synology NAS (room 1)
Since no one was answering in the other thread and this fact was basically being ignored, I have to open a new one.
So: with the new kernel PVE-5.15.30 my iSCSI connections break and can't be revived. Only with the "old" 5.13.19-6 kernel do my iSCSI connections work.
I have currently...
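Until the regression is fixed, one workaround is to keep booting the kernel that is known to work. On systems booted via proxmox-boot-tool, newer tool versions provide a pin command (if yours doesn't have it yet, setting the default entry in the bootloader config achieves the same); the version string below matches the working kernel mentioned above:

```shell
# List installed kernels
proxmox-boot-tool kernel list
# Pin the known-good kernel (requires a proxmox-boot-tool version with "kernel pin")
proxmox-boot-tool kernel pin 5.13.19-6-pve
# Re-sync the boot entries
proxmox-boot-tool refresh
```

This keeps the host on 5.13.19-6 across updates until the pin is removed with `kernel unpin`.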
I'm in the process of setting up an HP MSA2040 SAN to work with Proxmox. As I've just started using a SAN, I have little more than zero knowledge of these things, so there will most probably be errors in my setup (or terms I don't use correctly).
Current status is:
- MSA2040 is connected via...
Since I set up some iSCSI storage in Proxmox version 6, I have a lot of errors in my server logs.
In dmesg, /var/log/messages, and /var/log/kern.log I see many messages like:
[2572603.021727] sd 10:0:0:2: [sdn] Unit Not Ready
[2572603.038628] sd 10:0:0:2: [sdn] Sense Key : Illegal Request...
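"Unit Not Ready" plus "Illegal Request" spam often comes from a LUN that is not an actual data disk, for example a storage array's management or utility LUN, being probed over and over by udev and LVM scans. A first step is to identify what sits at that SCSI address; the address below is taken from the log lines above, and lsscsi may need to be installed first:

```shell
# Show device type, vendor and model for the noisy LUN at 10:0:0:2
lsscsi -g 10:0:0:2
# The same information straight from sysfs
cat /sys/class/scsi_device/10:0:0:2/device/vendor \
    /sys/class/scsi_device/10:0:0:2/device/model
```

If it turns out to be a controller LUN, excluding it on the array side (or filtering it out of LVM scans) usually silences the log.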
Hello, some time ago I had a server die suddenly. I restored a backup onto a temporary server and decided to go with shared storage. Now I'm moving to a NAS with iSCSI and LVM and need to move those VMs from the temporary Proxmox host to the new one, of course with minimal downtime, but I cannot...
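Without clustering the two hosts, one approach with fairly small downtime is backup/restore over storage both nodes can reach (the new NAS itself, or PBS). The VMID, storage names and path below are placeholders:

```shell
# On the temporary node: stop the VM briefly and back it up to shared storage
vzdump 100 --storage nas-backup --mode stop --compress zstd

# On the new node: restore onto the iSCSI/LVM storage and start the VM
qmrestore /mnt/pve/nas-backup/dump/vzdump-qemu-100-*.vma.zst 100 --storage lvm-iscsi
qm start 100
```

Downtime is then roughly the backup plus restore time. On newer PVE releases (7.3+) there is also an experimental `qm remote-migrate` for moving VMs between unclustered nodes, which may be worth a look.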
Hi all,
I have a Proxmox server connected to SAN storage providing 11 TB.
My problem is that it shows 4 disks for the same storage, for example:
sdb 8:16 0 12T 0 disk
└─sdb1 8:17 0 12T 0 part
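Four identical disks for one LUN usually means the server reaches the SAN over four paths, and each path gets its own sdX node. The standard fix is multipathing, so all paths collapse into a single /dev/mapper device that you then use for LVM. A minimal sketch for Debian/PVE, with a deliberately simple config:

```shell
# Install the multipath tooling
apt install multipath-tools

# Minimal config: group paths by WWID, use readable names
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF

systemctl restart multipathd
# The four sdX paths should now appear as one mpath device
multipath -ll
```

Real setups usually add vendor-specific settings recommended by the SAN vendor, so treat this only as a starting point.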
I've found a few threads about a similar problem, but none of them has a solution.
We have a 3-node cluster with multipathed iSCSI storage on an IBM Storwize SAN. On Proxmox 6.4, which we use, there is a shared LVM that stores VMs for all nodes. We didn't notice any performance...
Please consider that I am very new to Proxmox.
The concept: I have two servers, one full of SSDs and the other full of HDDs. I run virtual desktops on the SSD server and store big files on the HDD server.
I would like the solution that fits best for sharing my HDD server's storage...
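For sharing the HDD server's storage with the SSD server, NFS is often the simplest fit, since the same export can be mounted by several PVE nodes at once. A sketch of what the storage definition could look like, with placeholder names, IP and export path:

```
# /etc/pve/storage.cfg (illustrative values)
nfs: hdd-store
    server 192.0.2.20
    export /mnt/tank/share
    path /mnt/pve/hdd-store
    content images,backup,iso
```

iSCSI plus LVM is the alternative if block storage is preferred, at the cost of losing file-level access and (with plain LVM) snapshots.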
I am looking for a SAN storage solution in a scale-up configuration, if possible with HA, for VMs and LXC containers under Proxmox.
The Ceph solution is very attractive in tandem with Proxmox in a scale-out configuration, but the hardware I have does not allow me to do it...
I need to replace a switch in one of our datacenters, that is used by a couple of PVE nodes for almost anything (corosync, ZFS over iSCSI, node to node traffic for VMs etc.).
The replacement shouldn't take long, but still it may take a minute or so. For all the traffic except ZFS over...
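For the storage traffic specifically, one knob worth knowing is the iSCSI replacement timeout: as long as it exceeds the outage, in-flight I/O should block and retry rather than error out to the VMs. This applies if the nodes use the kernel open-iscsi initiator for that storage (ZFS over iSCSI can also go through QEMU's built-in libiscsi initiator, where this file does not apply). Illustrative fragment; the open-iscsi default is 120 seconds, so a one-minute swap may already be covered:

```
# /etc/iscsi/iscsid.conf (affects new sessions; existing ones can be
# updated via iscsiadm -m node ... --op update before the swap)
node.session.timeo.replacement_timeout = 300
```

Corosync is the more delicate part; a second ring on a link that does not go through the switch being replaced avoids quorum trouble.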
When I saw that support for installing to iSCSI with the official ISO is not actually planned, I searched for ideas to make this possible.
I found a workaround with the official PVE 7.1 ISO and want to share it; maybe it can help someone.
Only the iSCSI workaround will be explained, I...
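For reference, on a plain Debian base (which PVE sits on) the stock mechanism for booting from iSCSI is open-iscsi's initramfs hook: the presence of /etc/iscsi/iscsi.initramfs makes update-initramfs bundle the initiator so the root LUN can be logged into at boot. All values below are placeholders:

```
# /etc/iscsi/iscsi.initramfs (placeholders; run update-initramfs -u after editing)
ISCSI_INITIATOR=iqn.1993-08.org.debian:01:pve-node1
ISCSI_TARGET_NAME=iqn.2001-04.com.example:storage.target-1
ISCSI_TARGET_IP=192.0.2.10
ISCSI_TARGET_PORT=3260
```

Any ISO-based workaround still ends up relying on this mechanism (or an equivalent) once the installed system boots on its own.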
Hi there!
After two weeks of testing a lot of solutions, I've come here hoping to get an answer about my issue.
I have used Proxmox since v4, and now I have two Synology RS2421+ units in HA mode for storage and want to make my three new hypervisors boot over iSCSI.
My stuff:
- MB Asrock Rack...
Hello, I was trying to update my Proxmox installation, and now I get the following error when I run apt update && apt dist-upgrade. I have tried apt clean as well as apt reinstall proxmox-ve, but I keep getting this error; it stops the upgrade, and now none of my VMs or...
We're running Proxmox 7.1-5 on Dell R7525 servers (AMD CPUs) with two Dell Broadcom 57414 dual-port 25GbE SFP+ NICs attached.
We have two 25GbE switches cabled correctly and connected to our iSCSI storage appliance.
One card is in riser 1 and the other is in riser 3; I'll...