Hi, I see my KVM images on the iSCSI LUN, but not the container images — where are those stored?
Also, I am getting abysmally slow speeds on a cluster of 10 x Samsung 830 SSDs in RAID 10: around 10 MB/s, while on another server accessing the array directly I get 2 GB/s. What is the problem here?
Hi, right, sorry, I knew that; my question was where it is saved, and you answered it. However, if I pull templates from one NFS store but want them loaded on another, that's possible, right?
Can you test first without bonding whether the speed is OK? Is all network access this slow, or only traffic to the storage box? Test with iperf to verify that you get the expected network speed.

As for the slow speeds, here's the description.
The current setup is as follows:
4 independent nodes with 4 ports each: 2 of these ports are on switch 1 and 2 are on switch 2. Currently, each “set of ports” on each switch is 1 public and 1 private per node.
Our SAN has 16 network ports; 8 of these are plugged into switch 1 and 8 into switch 2.
The way the switches are configured is rather basic.
- 2 independent uplinks from the upstream host into each switch with a next-hop gateway
- VLAN 2 for all public network traffic
- VLAN 3 for all private network traffic (no gateway)
- There is a 4-port (4 Gbps) trunk (Brocade speak for a Cisco EtherChannel) between VLAN 3 on switch #1 and VLAN 3 on switch #2 (effectively, the switches are stacked via this trunk)
- We have currently removed any tagged ports (Brocade speak for Cisco trunks) between VLAN 2 and VLAN 3
- Spanning-tree is on
There is no bonding at the moment, and we tested a very simple KVM VM on one of the nodes with HA.
We tested dd with an 8 KB write size and a count of 10,000 and got back 10 MB/s! On the SAN itself we got over 2.5 GB/s, and on a standalone SATA server we got 500 MB/s+.
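For reproducibility, the dd run could look like this (the mount point is an assumption; `conv=fsync` and `oflag=direct` keep the page cache from inflating the number):

```shell
# Buffered write of 10,000 x 8 KiB blocks (~80 MiB), flushed at the end.
# /mnt/san/ddtest is an assumed path on the iSCSI-backed storage.
dd if=/dev/zero of=/mnt/san/ddtest bs=8k count=10000 conv=fsync

# Same write with the page cache bypassed, measuring the device itself.
dd if=/dev/zero of=/mnt/san/ddtest bs=8k count=10000 oflag=direct
```

Note that a small block size with `oflag=direct` is latency-bound; rerunning with `bs=1M` shows the difference between IOPS and raw throughput.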
We tested hdparm similarly across the setups and likewise got low numbers.
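To separate the network from the disks, the iperf check suggested above could look like this (the SAN address is an assumption):

```shell
# On the storage box:
iperf -s

# On a Proxmox node: 30-second run with 4 parallel streams.
iperf -c 192.168.3.10 -t 30 -P 4
```

Anything far below wire speed here points at the network (or the inter-switch trunk) rather than the SSDs.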
Networking options considered:
- Bond the private NICs on the Proxmox nodes (mind you, they are on 2 separate switches …)
- Bond the public NICs on the Proxmox nodes (again, separate switches) and place them in active/passive failover mode
What is the optimal setup given our hardware?
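If you go the bonding route, active-backup is the one mode that needs no switch-side support, so it works across two independent switches. A sketch for /etc/network/interfaces (interface names and addresses are assumptions):

```
auto bond0
iface bond0 inet manual
    slaves eth2 eth3
    bond_mode active-backup
    bond_miimon 100

auto vmbr1
iface vmbr1 inet static
    address 192.168.3.11
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

LACP (802.3ad) would aggregate bandwidth, but it requires both links to land on the same switch (or a true stack/MLAG pair), which the trunked-VLAN setup described above does not provide.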
Have you tried changing the cache parameters in Proxmox for the disk: writethrough, directsync, and writeback?

When I write a brand-new file in the KVM VM, I get 400-500 MB/s. But if I rewrite over that file, it drops to 10 MB/s. I assume that's the caching.
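For reference, the cache mode can be set per disk from the CLI; a sketch, where the VM ID, storage name, and volume name are assumptions:

```shell
qm set 100 -virtio0 san-iscsi:vm-100-disk-1,cache=writeback
```

Testing with `cache=directsync` should show the raw device speed with no host caching in the path, which helps tell a caching artifact apart from a genuinely slow storage path.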
Also, will each of the NICs on the Proxmox node act as an initiator, or will it only be the node IP? (I.e., we have 4 NICs per node and want to take advantage of all of them.) Can we only do this via bonding?
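Bonding is not the only option: each NIC can carry its own iSCSI session by binding open-iscsi interfaces to specific NICs and running dm-multipath over the resulting sessions. A sketch (the iface names, NIC names, and portal IP are assumptions):

```shell
# Create one open-iscsi iface per storage-facing NIC and bind it:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3

# Discover the target through both interfaces, then log in:
iscsiadm -m discovery -t sendtargets -p 192.168.3.10 -I iface0 -I iface1
iscsiadm -m node --loginall=all
```

dm-multipath then presents the sessions as a single block device, giving failover and, with a round-robin path policy, aggregate bandwidth across the NICs.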