Where are OpenVZ container images saved? also slow speeds

ejc317

Member
Oct 18, 2012
I can see the KVM images on my iSCSI LUN, but not the container images - where are those saved?

Also, I am getting abysmally slow speeds on a 10 x Samsung 830 SSD array in RAID 10 ... around 10 MB/s - on another server, writing directly, I get about 2 GB/s ... what is the problem here?
 
Hi,
OpenVZ uses containers - there is no disk image. The data live directly in a filesystem - normally under /var/lib/vz/private/VMID, or below your storage in ./private/VMID.
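For example, for a container with VMID 101 on the default local storage you could check it like this (the VMID and the second storage path are only placeholders, adjust them to your setup):

# root filesystem of container 101 on the default local storage
ls /var/lib/vz/private/101
# the same container on an added directory/NFS storage
ls /mnt/pve/mystorage/private/101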

About your slow speed: please give more facts (how is the storage connected, what other I/O is active, how did you measure...).

Udo
 
Right, sorry, I knew that - my question was where it is saved, and you answered it :D. However, if I am pulling templates from one NFS store but want them loaded onto another, that's possible, right?

As for slow speeds, here's the description

The current setup is as follows:


4 independent nodes with 4 ports each: 2 of those ports go to switch 1 and 2 go to switch 2. Currently, each “set of ports” on each switch is 1 public and 1 private per node.

Our SAN has 16 network ports; 8 of these are plugged into switch 1 and 8 into switch 2.

The way the switches are configured is rather basic.


  • 2 independent uplinks from the upstream host into each switch with a next-hop gateway
  • VLAN 2 for all public network traffic
  • VLAN 3 for all private network traffic (no gateway)
  • There is a 4-port (4 Gbps) trunk (Brocade-speak for Cisco EtherChannel) between VLAN 3 on switch #1 and VLAN 3 on switch #2 (effectively, the switches are stacked via this trunk)
  • We currently removed any tagged ports (Brocade-speak for Cisco trunks) between VLAN 2 and VLAN 3
  • Spanning-tree is on

There is no bonding at the moment, and we tested a very simple KVM VM on one of the nodes with HA.

We tested dd with an 8 KB block size and a count of 10k and got back 10 MB/s! On the SAN itself we got over 2.5 GB/s, and on a standalone SATA server we got 500 MB/s+.
We tested hdparm similarly across the setups and also got a low number.
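For reference, the commands were roughly the following (the file name and device are illustrative; note that an 8k x 10k run is only ~80 MB, so caching can skew the result):

# sequential write test, 8 KB blocks, 10k blocks (~80 MB total)
dd if=/dev/zero of=/tmp/ddtest bs=8k count=10k
# read timing on the block device
hdparm -tT /dev/sda1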

Networking Options considered


  • Bond the private NICs on the Proxmox nodes (mind you, they are on 2 separate switches …)
  • Bond the public NICs on the Proxmox nodes (again, separate switches, and place them in active/passive failover mode)

What is the optimal setup given our hardware?
 
Hi,
the templates live below any storage where you have also selected "Template" as content (the storage needs to be a filesystem), inside the directory ./template/cache/ (gzip-compressed tar files).
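For example, on the default local storage you would find them here (the second path assumes an added directory/NFS storage and is only a placeholder):

# OpenVZ templates on the default local storage
ls /var/lib/vz/template/cache/
# the same directory on an added directory/NFS storage
ls /mnt/pve/mystorage/template/cache/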
Can you first test without bonding whether the speed is OK? Is all network access that slow, or only access to the storage box? Test with iperf that you get the expected network speed.
If all runs well, try enabling bonding again.
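For the iperf check, something like this would do (10.10.1.1 standing in for the SAN IP):

# on the storage box
iperf -s
# on a Proxmox node, run for 30 seconds against the SAN
iperf -c 10.10.1.1 -t 30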

Udo
 
Network access is perfectly fine; I can wget from an external HTTP server at ~10-20 Mbps and from Internap at up to 600-700.

The issue that we're having with bonding is that if we bond with balance-rr we get packet loss (since the ports are not trunked on the switch side ... they're on 2 different switches) - should we move the switch ports? The reason they're separated is for redundancy.

I guess we should have 2 pairs of 2 ports on 2 switches for redundancy as well as increased throughput?

The SAN has 4 x 4 bundles of trunked ports, so I know it's not an issue going into the SAN.
 
For reference

From the KVM VM

/dev/sda1:
Timing cached reads: 11786 MB in 2.00 seconds = 5902.58 MB/sec
Timing buffered disk reads: 36 MB in 3.04 seconds = 11.85 MB/sec

From the Node Native

/dev/sda1:
Timing cached reads: 11442 MB in 2.00 seconds = 5726.97 MB/sec
Timing buffered disk reads: 510 MB in 2.56 seconds = 199.13 MB/sec

From the SAN Native

/dev/sda1:
Timing cached reads: 14212 MB in 2.00 seconds = 7113.64 MB/sec
Timing buffered disk reads: 4098 MB in 3.00 seconds = 1365.89 MB/sec

When I write a brand new file in the KVM VM, I get 400-500 MB/s. But if I rewrite over that file it goes down to 10 MB/s - I assume that's down to caching.
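A quick way to take the guest page cache out of the equation is to repeat the test with direct or synchronous writes (assuming GNU dd inside the VM; file name is illustrative):

# bypass the guest page cache
dd if=/dev/zero of=testfile bs=8k count=10k oflag=direct
# or include the final flush to disk in the timing
dd if=/dev/zero of=testfile bs=8k count=10k conv=fdatasync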
 
Have you tried changing the cache setting for the disk in Proxmox - writethrough, directsync, and writeback?
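For example, from the command line it would look something like this (VMID 101, the virtio0 bus and the volume name are only placeholders - check your VM config for the real disk line):

# show the current disk line of the VM
qm config 101 | grep virtio0
# set the cache mode on that disk, keeping the other options unchanged
qm set 101 --virtio0 san_lvm:vm-101-disk-1,cache=writethrough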

PS: What filesystem do you use in the VM? What mount options do you use?
 
The OpenVZ containers are all ext3 - for the KVM VM, I have to check.

Side note, this may sound stupid, but in Proxmox, say I add an iSCSI SAN with an IP of 10.10.1.1 - but that server has, say, 10 different NICs on it with other IPs - will Proxmox ONLY try to get SAN data from that one IP? I can see that being a bottleneck.

Conversely, will each of the NICs on the Proxmox node act as an initiator, or will it only be the node IP? (i.e. we have 4 NICs per node and want to take advantage of all of them) - can we only do this via bonding?
 
Thanks Dietmar, I've turned on multipathing already - will test it.
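To verify multipathing is actually in use, something like this should show multiple paths per LUN and one iSCSI session per portal (assuming the standard multipath-tools and open-iscsi packages):

# paths per multipathed LUN
multipath -ll
# active iSCSI sessions
iscsiadm -m session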

I think we will do a round-robin bond of the public NICs, and the private ones we will multipath to the switch. On the SAN side, we will bond 4 ports for NFS (or do you think we need to bond on the server side for NFS too?) and leave the remainder unbonded.
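For a bond whose two ports sit on two different (non-stacked) switches, active-backup avoids the packet loss seen with balance-rr; a minimal sketch in /etc/network/interfaces could look like this (interface names, bridge name and address are placeholders):

auto bond0
iface bond0 inet manual
        slaves eth2 eth3
        bond_miimon 100
        bond_mode active-backup

auto vmbr1
iface vmbr1 inet static
        address 10.10.3.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0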
 
