This is a virtual disk on iSCSI that was moved from one Proxmox system to another. /dev/sdc doesn't have the "a" (active) attribute in the pvs output, so I'm unable to use the disk. The disk also has 9 of its 11 TB used.
PV         VG  Fmt  Attr PSize   PFree
/dev/sda3  pve lvm2 a--  931.07g 15.79g
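When a PV shows up but its volumes aren't usable, the usual first step is to rescan and re-activate LVM. This is a generic LVM troubleshooting sketch, not specific to this SAN:

```shell
# Refresh LVM's view of physical volumes and volume groups
pvscan --cache
vgscan
# Activate all logical volumes in all volume groups
vgchange -ay
# Verify the attribute column afterwards
pvs -o pv_name,vg_name,pv_attr
```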
Figured it out.
/dev/mapper/mpath0 was still trying to connect to iSCSI, which was hanging everything. Once I flushed it (multipath -F), everything started working.
This was left over from a previous iSCSI SAN connection; I'm surprised that something like that hung the entire thing.
dmsetup info helped find the...
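Roughly how the diagnosis went — list the device-mapper targets to spot the stale map, then flush it:

```shell
# Compact listing of all device-mapper devices (mpath0 showed up here)
dmsetup info -c
# Restrict to multipath-backed maps
dmsetup ls --target multipath
# Show multipath topology; a dead SAN path hangs or errors here
multipath -ll
# Flush all unused multipath maps (clears the stale mpath0)
multipath -F
```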
All local storage. fdisk -l hangs and pvs hangs, so I'm guessing there's an issue with the local storage, but I'm not certain where to start. This is on Proxmox 5 on two different servers. I'm also unable to log in via the GUI, and qm start <vmid> fails as well.
qm start 1043
WARNING: Not using...
I'm receiving the error "Permission denied - invalid csrf token (401)" when accessing a VM console on a single Proxmox server over VPN. I can access this server fine when I'm not using the SSL VPN (at home on the same network).
I still had a connection after the reboot; I just wasn't getting the speed I should.
We have tested with iperf between the two X520s and are only getting about 564 Kbits/sec, so we know there is a problem.
Moving both NICs to Windows does solve the problem, so I believe the problem is...
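For reference, roughly how we measured — hostnames/addresses are placeholders for the two X520-equipped nodes:

```shell
# On the receiving node:
iperf -s
# On the sending node (10.0.0.2 is an assumed address for the receiver):
iperf -c 10.0.0.2 -t 30 -i 5   # 30-second run, report every 5 seconds
```

A healthy 10GbE link should report somewhere near 9.4 Gbits/sec; we saw 564 Kbits/sec.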
We currently have an Enhance Tech es3160 SAN with 10Gb controllers.
We connected this SAN to a VMware box and, using Windows and Linux clients, achieved speeds of 600-800MB/s.
We then took the same hardware configuration, installed Proxmox, and can only achieve about 200MB/s.
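For anyone wanting to reproduce the comparison, this is the kind of sequential-throughput check we ran on the Proxmox side (the mount path is an assumption; adjust for your setup):

```shell
# Write test: 4 GiB sequential, flushed to disk before dd reports a rate
dd if=/dev/zero of=/mnt/san/testfile bs=1M count=4096 conv=fdatasync
# Drop the page cache so the read test hits the SAN, not RAM
echo 3 > /proc/sys/vm/drop_caches
# Read test
dd if=/mnt/san/testfile of=/dev/null bs=1M
```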
After I made the post I did see the option for allocating the storage to certain nodes. Thanks for confirming this.
I'm going with 3 nodes. The 3rd node will be a server, but not as robust as the other two main nodes.
We currently have 1.9 with 2 nodes connected to a SAN using 10Gb fibre.
If we add a 3rd node in 2.0 for HA, we will not be able to connect that SAN to the 3rd node because of cost constraints.
Is it possible to not have the 3rd node connected to the storage, or should we just use the two node...
I have 4 1Gb ports multipathed using an Enhance Tech SAN.
Hard drives - RAID 10 with 6 drives, SAS 7200RPM 1TB
I'm getting the below speeds -
ProxCluster00:/mnt/test# dd if=/dev/zero of=/mnt/test/13GBfile bs=128k count=100K conv=fdatasync
102400+0 records in
102400+0 records out...
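With four 1Gb paths, the theoretical ceiling is about 4 × 125 MB/s ≈ 500 MB/s, and only if I/O is actually spread across all paths. Worth checking while the dd runs (device names below are assumptions for the four path members):

```shell
# All four paths should be active/ready in a single path group
multipath -ll
# During the dd run, every member device should show traffic
iostat -xm 2 sdb sdc sdd sde
```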
Re: Proxmox + DRBD = "DRBD module version: 8.3.8, userland version: 8.3.7, you should upgrade your drbd tools!"
I do not get any errors besides the above. I have had random DRBD disconnections that result in split-brain.
I was thinking that the tools and module not being the same version might have something to do...
I'm receiving the message below when running drbdadm status or drbdadm show-gi r0.
How do I downgrade the DRBD module or upgrade the userland version?
DRBD module version: 8.3.8
userland version: 8.3.7
you should upgrade your drbd tools!
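A sketch of how I'd check both sides of the mismatch and upgrade the userland tools on a Debian-based Proxmox host (assuming the DRBD 8.x tools ship in the drbd8-utils package, as on Debian):

```shell
# Kernel module version (reports 8.3.8 here)
cat /proc/drbd
# Installed userland tools version (reports 8.3.7 here)
dpkg -l drbd8-utils
# Upgrading the tools package is usually easier than downgrading the module
apt-get update
apt-get install drbd8-utils
```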