Search results

  1. running pvs shows no attribute for allocate?

    This is a virtual disk on iSCSI that was moved from one Proxmox system to another. /dev/sdc doesn't have the "a" (allocatable) attribute, so I'm unable to use the disk. This disk also has 9 of its 11 TB used. PV VG Fmt Attr PSize PFree /dev/sda3 pve lvm2 a-- 931.07g 15.79g /dev/sdc...
  2. Unable to start VMs and everything has gray question marks.

    Figured it out. /dev/mapper/mpath0 was still trying to connect to iSCSI, which was hanging everything. Once removed (multipath -F) it started working. This was from a previous iSCSI SAN connection; I'm surprised that something like that hung the entire thing. dmsetup info helped find the...
  3. Unable to start VMs and everything has gray question marks.

    All local storage. fdisk -l hangs and pvs hangs, so I'm guessing there's an issue with the local storage, but I'm not certain where to start. This is on Proxmox 5 on two different servers. Also unable to log in via the GUI. qm start <vmid> fails as well. Example - qm start 1043 WARNING: Not using...
  4. Clustered - Volume Group not activating on one node.

    The first node has the VG and it's working fine. The second node reports the error "Incorrect metadata area header checksum on `/dev/mapper0` at offset 4096" when trying to start a VM on it. Thanks
  5. Proxmox 5 - Two nodes and a SAN, live migration?

    Is it possible to have live migration with two nodes (I don't have a 3rd to implement) and a SAN (iSCSI)? I would even settle for the ability to manually bring the VMs up on the other node. Thanks
  6. Error Permission denied - invalid csrf token (401)

    I don't know what the issue was but Firefox works while IE doesn't.
  7. Error Permission denied - invalid csrf token (401)

    I'm receiving this error "Error Permission denied - invalid csrf token (401)" when accessing a VM console on a single Proxmox server using VPN. I can access this server if I'm not using SSL VPN (at home on the same network).
  8. iscsi SAN with LACP

    Sounds like you need multipath to utilize all of the 1Gb paths. http://pve.proxmox.com/wiki/ISCSI_Multipath
  9. 10Gb performance - SAN

    I still had a connection after the reboot; I just wasn't getting the speed I should. We have tested using iperf between the two X520s and are getting speeds of about 564 Kbits/sec, so we know there is a problem. Moving both NICs to Windows does solve the problem, so I believe the problem is...
  10. 10Gb performance - SAN

    We currently have an Enhance Tech ES3160 SAN with 10Gb controllers. We have connected this SAN to a VMware box and, using Windows and Linux clients, achieved speeds of 600-800MB/s. We then take that same hardware configuration, install Proxmox, and can only achieve about 200MB/s. In both...
  11. Question about two vs. three nodes

    After I made the post I did see the option for allocating the storage to certain nodes. Thanks for confirming this. I'm going with 3 nodes. The 3rd node will be a server but not as robust as the other 2 main nodes.
  12. Question about two vs. three nodes

    We currently have 1.9 with 2 nodes connected to a SAN using 10gb fibre. If we add a 3rd node in 2.0 for HA we will not be able to connect that SAN to the 3rd node because of cost constraints. Is it possible to not have the 3rd node connected to the storage or should we just use the two node...
  13. Is my iSCSI SAN slow?

    I have four 1Gb ports multipath'd using an Enhance Tech SAN. Hard drives: RAID 10 with 6 drives, SAS 7200RPM 1TB. I'm getting the speeds below - ProxCluster00:/mnt/test# dd if=/dev/zero of=/mnt/test/13GBfile bs=128k count=100K conv=fdatasync 102400+0 records in 102400+0 records out...
  14. Update DRBD userland to 8.3.10 to match kernel in 1.9

    I followed the directions and userland 8.3.11 was installed, which looks to be the newest. How can I install 8.3.10? Thanks
  15. DRBD Module Version and Proxmox 1.9

    How do I upgrade the module version of DRBD? DRBD module version: 8.3.8 userland version: 8.3.11 pve-manager: 1.9-24 (pve-manager/1.9/6542) running kernel: 2.6.35-2-pve proxmox-ve-2.6.35: 1.8-13 pve-kernel-2.6.32-4-pve: 2.6.32-33 pve-kernel-2.6.35-2-pve: 2.6.35-13 qemu-server: 1.1-32...
  16. Proxmox + DRBD = DRBD module version: 8.3.8 userland version: 8.3.7 you should upgra

    Re: Proxmox + DRBD = DRBD module version: 8.3.8 userland version: 8.3.7 you should u I do not get any errors besides the above. I have had random DRBD disconnections which result in split-brain. I was thinking that the tools and module not being the same version might have something to do...
  17. Proxmox + DRBD = DRBD module version: 8.3.8 userland version: 8.3.7 you should upgra

    Re: Proxmox + DRBD = DRBD module version: 8.3.8 userland version: 8.3.7 you should u pve-manager: 1.8-18 (pve-manager/1.8/6070) running kernel: 2.6.35-1-pve proxmox-ve-2.6.35: 1.8-11 pve-kernel-2.6.32-4-pve: 2.6.32-33 pve-kernel-2.6.35-1-pve: 2.6.35-11 qemu-server: 1.1-30 pve-firmware: 1.0-11...
  18. Proxmox + DRBD = DRBD module version: 8.3.8 userland version: 8.3.7 you should upgra

    I'm receiving the message below when running drbdadm status or drbdadm show-gi r0. How do I downgrade the DRBD module or upgrade the userland version? DRBD module version: 8.3.8 userland version: 8.3.7 you should upgrade your drbd tools!
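Several of the threads above (e.g. 9, 10, and 13) benchmark storage throughput with dd using conv=fdatasync, which forces cached writes to be flushed before dd reports its timing. A minimal, safe-to-run sketch of that test, scaled down to a 10 MiB file and a hypothetical path (the thread used /mnt/test on the SAN-backed mount, which is what you would substitute to measure the storage in question):

```shell
#!/bin/sh
# Hypothetical output path; point this at the mount you want to test
# (e.g. /mnt/test on the iSCSI/multipath storage) for a real measurement.
OUT=/tmp/bench.img

# Write 80 blocks of 128 KiB (10 MiB total); conv=fdatasync makes dd
# flush the data to disk before printing its throughput figure.
dd if=/dev/zero of="$OUT" bs=128k count=80 conv=fdatasync

# Sanity-check the written size: 80 * 131072 bytes = 10485760 (10 MiB).
wc -c < "$OUT"

rm -f "$OUT"
```

The forum example uses bs=128k count=100K (13 GB) so that the file comfortably exceeds RAM and controller caches; a run this small only demonstrates the command shape, not realistic sustained throughput.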
