Node offline while migrating CTs

300cpilot
This has happened twice in 24 hours. I was trying to migrate CTs to the second node; the first two went OK, but on the third try this happened. I cancelled the migration, and here I am.

Last night I got it going again after I forced a reboot. It took several hours to get everything up again.

See the attached picture.

What does everyone think is causing this? It started yesterday.
 

Attachments

  • 1Capture.JPG (42.1 KB)
I am able to pct stop <vmid>.

Then move the container config to the node that has the replicated copy of the volume with:
mv /etc/pve/nodes/prox-002/lxc/313.conf /etc/pve/nodes/prox-001/lxc/313.conf
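
Putting those steps together, a minimal sketch of the recovery, using CT 313 and the node names from the command above (this assumes /etc/pve is still writable, i.e. the cluster has quorum, and that prox-001 holds a current replica; the final pct start is an assumed follow-up step, not from the original post):

pct stop 313
# reassign the container to the node holding the replicated volume
mv /etc/pve/nodes/prox-002/lxc/313.conf /etc/pve/nodes/prox-001/lxc/313.conf
# then, on prox-001:
pct start 313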
 
> This has happened twice in 24 hours. I was trying to migrate CTs to the second node; the first two went OK, but on the third try this happened. [...]

I assume you send all migration traffic over your cluster network?

Please check the guides on how to separate your networks.
 
Please post your:

> /etc/pve/datacenter.cfg

and your:

> /etc/network/interfaces
 
I have also tried changing the gateway from one NIC to the other. That put the gateway on the 10 gig, but it broke internet access for my CTs. Also, when trying to migrate a CT/VM to my 3rd node, I now get periods where it reports no traffic being sent/received; see the log at the bottom.

I have also found that the node that keeps doing this lost a drive at some point, so the RAID is slower because of it. The box is configured as a RAID 6 on a hardware controller, presented to Proxmox as a single-disk ZFS pool. I tried to rebuild the RAID, but it failed. I am now rebuilding the box. Luckily I had all the VMs/CTs replicated to the other 2 nodes and could fire them up.

Also, below it is calling out my NIC enp5s0 as one gig, or am I reading it wrong? I added the port info from the Cisco switch; the port is trained up as ten gig.

When I rebuild the box I will have to leave it as a RAID 6 with 6 drives. Is ZFS the right way to go? I cannot change the controller because of the backplane, so JBOD is not possible. I could make 6 RAID 0s, though?
Thoughts?

/etc/pve/datacenter.cfg:

keyboard: en-us
# use dedicated migration network
migration: secure,network=10.90.1.0/24

/etc/network/interfaces:

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno3 inet manual

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.80.1.214
        netmask 255.255.255.0
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

#10 Gig Bridge
auto vmbr2
iface vmbr2 inet static
        address 10.90.1.2
        netmask 255.255.255.0
        gateway 10.90.1.1
        mtu 1280
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


ip a:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether d4:be:d9:af:c6:75 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:be:d9:af:c6:77 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:be:d9:af:c6:79 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:be:d9:af:c6:7b brd ff:ff:ff:ff:ff:ff
6: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1280 qdisc mq master vmbr2 state UP group default qlen 1000
link/ether 00:02:c9:4e:fc:0c brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d4:be:d9:af:c6:75 brd ff:ff:ff:ff:ff:ff
inet 10.80.1.214/24 brd 10.80.1.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::d6be:d9ff:feaf:c675/64 scope link
valid_lft forever preferred_lft forever
8: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1280 qdisc noqueue state UP group default qlen 1000
link/ether 00:02:c9:4e:fc:0c brd ff:ff:ff:ff:ff:ff
inet 10.90.1.2/24 brd 10.90.1.255 scope global vmbr2
valid_lft forever preferred_lft forever
inet6 fe80::202:c9ff:fe4e:fc0c/64 scope link
valid_lft forever preferred_lft forever
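
Side note on the speed question: ip a shows MTU and link state but not the negotiated speed, so it cannot tell you whether enp5s0 came up at 1G or 10G. A quick way to check would be ethtool (a hedged example, not from the original post):

ethtool enp5s0 | grep -i -e speed -e duplex
# a healthy 10G link should report: Speed: 10000Mb/s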


Migrating a template:

2019-04-13 05:26:55 use dedicated network address for sending migration traffic (10.80.1.217)
2019-04-13 05:26:55 starting migration of CT 1000 to node 'rocket' (10.80.1.217)
2019-04-13 05:26:55 found local volume 'local-zfs:basevol-1000-disk-0' (in current VM config)
full send of rpool/data/basevol-1000-disk-0@__base__ estimated size is 19.0G
send from @__base__ to rpool/data/basevol-1000-disk-0@__migration__ estimated size is 2.61K
total estimated size is 19.0G
TIME SENT SNAPSHOT
05:26:57 2.96M rpool/data/basevol-1000-disk-0@__base__
05:26:58 95.1M rpool/data/basevol-1000-disk-0@__base__
05:26:59 206M rpool/data/basevol-1000-disk-0@__base__
05:27:00 317M rpool/data/basevol-1000-disk-0@__base__
05:27:01 428M rpool/data/basevol-1000-disk-0@__base__
05:27:02 534M rpool/data/basevol-1000-disk-0@__base__
05:27:03 645M rpool/data/basevol-1000-disk-0@__base__
05:27:04 756M rpool/data/basevol-1000-disk-0@__base__
05:27:05 867M rpool/data/basevol-1000-disk-0@__base__
05:27:06 970M rpool/data/basevol-1000-disk-0@__base__
05:27:07 1.05G rpool/data/basevol-1000-disk-0@__base__
05:27:08 1.16G rpool/data/basevol-1000-disk-0@__base__
05:27:09 1.27G rpool/data/basevol-1000-disk-0@__base__
05:27:10 1.38G rpool/data/basevol-1000-disk-0@__base__
05:27:11 1.48G rpool/data/basevol-1000-disk-0@__base__
05:27:12 1.59G rpool/data/basevol-1000-disk-0@__base__
05:27:13 1.70G rpool/data/basevol-1000-disk-0@__base__
05:27:14 1.81G rpool/data/basevol-1000-disk-0@__base__
05:27:15 1.92G rpool/data/basevol-1000-disk-0@__base__
05:27:16 2.01G rpool/data/basevol-1000-disk-0@__base__
05:27:17 2.01G rpool/data/basevol-1000-disk-0@__base__
05:27:18 2.01G rpool/data/basevol-1000-disk-0@__base__
05:27:19 2.04G rpool/data/basevol-1000-disk-0@__base__
05:27:20 2.15G rpool/data/basevol-1000-disk-0@__base__
05:27:21 2.26G rpool/data/basevol-1000-disk-0@__base__
05:27:22 2.37G rpool/data/basevol-1000-disk-0@__base__
05:27:23 2.48G rpool/data/basevol-1000-disk-0@__base__
05:27:24 2.59G rpool/data/basevol-1000-disk-0@__base__
05:27:25 2.69G rpool/data/basevol-1000-disk-0@__base__
05:27:26 2.80G rpool/data/basevol-1000-disk-0@__base__
05:27:27 2.91G rpool/data/basevol-1000-disk-0@__base__
05:27:28 3.02G rpool/data/basevol-1000-disk-0@__base__
05:27:29 3.13G rpool/data/basevol-1000-disk-0@__base__
05:27:30 3.24G rpool/data/basevol-1000-disk-0@__base__
05:27:31 3.25G rpool/data/basevol-1000-disk-0@__base__
[... SENT holds at 3.25G, one line per second, through 05:29:09 ...]
05:29:10 3.29G rpool/data/basevol-1000-disk-0@__base__
05:29:11 3.40G rpool/data/basevol-1000-disk-0@__base__
05:29:12 3.51G rpool/data/basevol-1000-disk-0@__base__
05:29:13 3.62G rpool/data/basevol-1000-disk-0@__base__
05:29:14 3.72G rpool/data/basevol-1000-disk-0@__base__
05:29:15 3.83G rpool/data/basevol-1000-disk-0@__base__
05:29:16 3.94G rpool/data/basevol-1000-disk-0@__base__
05:29:17 4.05G rpool/data/basevol-1000-disk-0@__base__
05:29:18 4.15G rpool/data/basevol-1000-disk-0@__base__
05:29:19 4.26G rpool/data/basevol-1000-disk-0@__base__
05:29:20 4.37G rpool/data/basevol-1000-disk-0@__base__
05:29:21 4.48G rpool/data/basevol-1000-disk-0@__base__
05:29:22 4.49G rpool/data/basevol-1000-disk-0@__base__
05:29:23 4.49G rpool/data/basevol-1000-disk-0@__base__
05:29:24 4.49G rpool/data/basevol-1000-disk-0@__base__
05:29:25 4.49G rpool/data/basevol-1000-disk-0@__base__
05:29:26 4.58G rpool/data/basevol-1000-disk-0@__base__
05:29:27 4.69G rpool/data/basevol-1000-disk-0@__base__
05:29:28 4.80G rpool/data/basevol-1000-disk-0@__base__
05:29:29 4.91G rpool/data/basevol-1000-disk-0@__base__
05:29:30 5.02G rpool/data/basevol-1000-disk-0@__base__
05:29:31 5.12G rpool/data/basevol-1000-disk-0@__base__
05:29:32 5.18G rpool/data/basevol-1000-disk-0@__base__
[... SENT holds at 5.18G through 05:30:23 ...]
05:30:24 5.22G rpool/data/basevol-1000-disk-0@__base__
05:30:25 5.32G rpool/data/basevol-1000-disk-0@__base__
05:30:26 5.43G rpool/data/basevol-1000-disk-0@__base__
05:30:27 5.54G rpool/data/basevol-1000-disk-0@__base__
05:30:28 5.65G rpool/data/basevol-1000-disk-0@__base__
05:30:29 5.74G rpool/data/basevol-1000-disk-0@__base__
05:30:30 5.85G rpool/data/basevol-1000-disk-0@__base__
05:30:31 5.94G rpool/data/basevol-1000-disk-0@__base__
[... SENT holds at 5.94G through 05:31:06 ...]
05:31:07 6.05G rpool/data/basevol-1000-disk-0@__base__
05:31:08 6.16G rpool/data/basevol-1000-disk-0@__base__
05:31:09 6.27G rpool/data/basevol-1000-disk-0@__base__
05:31:10 6.37G rpool/data/basevol-1000-disk-0@__base__
05:31:11 6.48G rpool/data/basevol-1000-disk-0@__base__
05:31:12 6.59G rpool/data/basevol-1000-disk-0@__base__
05:31:13 6.70G rpool/data/basevol-1000-disk-0@__base__
05:31:14 6.81G rpool/data/basevol-1000-disk-0@__base__
05:31:15 6.92G rpool/data/basevol-1000-disk-0@__base__
05:31:16 7.02G rpool/data/basevol-1000-disk-0@__base__
05:31:17 7.04G rpool/data/basevol-1000-disk-0@__base__
[... SENT holds at 7.04G through 05:31:41 ...]
05:31:42 7.10G rpool/data/basevol-1000-disk-0@__base__
05:31:43 7.21G rpool/data/basevol-1000-disk-0@__base__
05:31:44 7.32G rpool/data/basevol-1000-disk-0@__base__
05:31:45 7.43G rpool/data/basevol-1000-disk-0@__base__
05:31:46 7.54G rpool/data/basevol-1000-disk-0@__base__
05:31:47 7.65G rpool/data/basevol-1000-disk-0@__base__
05:31:48 7.68G rpool/data/basevol-1000-disk-0@__base__
[... SENT holds at 7.68G through 05:32:18, where the captured log ends ...]


Switch port it is plugged into:
LANSW-02#sh int te1/49
TenGigabitEthernet1/49 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet Port, address is 2c54.2dbd.4f70 (bia 2c54.2dbd.4f70)
Description: ProxMox Server04
MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s, link type is auto, media type is 10GBase-CU 3M
input flow-control is on, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input 5w5d, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 90000 bits/sec, 15 packets/sec
17241023066 packets input, 23625220653510 bytes, 0 no buffer
Received 859429 broadcasts (1841 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
10596969735 packets output, 13851066473575 bytes, 0 underruns
0 output errors, 0 collisions, 3 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
LANSW-02#
 
Figured I would give it a try, and rebuilt the node. Created 6 RAID 0s on the RAID card, then laid ZFS RAIDZ-3 across all 6 drives.
I can now migrate 10 GB in 1 minute and 26 seconds.
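
For reference, a rough sketch of what that pool layout might look like (device names are hypothetical placeholders, and "zfs-3" is read here as RAIDZ-3; substitute raidz or raidz2 if that is not what was meant):

# six single-drive RAID 0 virtual disks from the controller, pooled as RAIDZ-3
zpool create -o ashift=12 tank raidz3 sdb sdc sdd sde sdf sdg
zpool status tank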

I also took this time to clean out my known_hosts files and keys. When you delete a node, its entries do not get cleaned out, and you then run into connection issues with a host you re-add on the same IPs. I changed the hostname this time.
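
A minimal sketch of that cleanup (the node name and IP below are hypothetical placeholders):

# remove stale host keys for the deleted node
ssh-keygen -R old-node-name
ssh-keygen -R 10.80.1.215
# refresh the cluster-wide known_hosts and certificates kept on /etc/pve
pvecm updatecerts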

This is a test environment: 4 nodes and shared storage, currently running 110 VMs. If this works out, we will roll it into production after 60 days or so.
 
