Hi all,
I'm new to Proxmox VE and to this forum too. I hope I can get help with my problem here.
I built a lab with an R730 server running the latest Proxmox VE 8.1.4 (kernel 6.5.11-8-pve) as hypervisor.
This one has two NICs:
- one integrated 2P X520/2P I350
- one additional XL710-QDA2
I would like to use the 40GbE network to reach my storage server, but it doesn't work.
The storage server is an R730XD running Solaris and shares its data over NFS; it has the same NICs.
Everything works fine when I use the 10GbE network.
Dell firmware is up to date on both servers.
In iDRAC, the NIC is shown as a Cisco NIC and not an Intel one (but it's the same chip).
I use two Cisco N9K 40GbE switches to connect the Proxmox server and the storage server; bonding is configured on both servers with 2x40GbE active LACP.
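For reference, the bond on the PVE side looks roughly like this in /etc/network/interfaces (interface names and the address below are just examples, I can post the exact file if needed):
Code:
auto bond1
iface bond1 inet static
        address 10.90.100.10/24            # example address on the 40GbE storage network
        bond-slaves enp5s0f0 enp5s0f1      # the two XL710 40GbE ports (names are examples)
        bond-miimon 100
        bond-mode 802.3ad                  # active LACP
        bond-xmit-hash-policy layer3+4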
Mounting the NFS storage works fine on PVE; in this state everything is OK.
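The NFS storage is declared like this in /etc/pve/storage.cfg (the export path and options are approximate):
Code:
nfs: ZFS-01-40G
        export /ZFS-01-40G
        path /mnt/pve/ZFS-01-40G
        server 10.90.100.101
        content images,iso
        options vers=3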
For the moment, no VM uses this datastore.
Then I launch my test, which tries to import a virtual disk located at the root of the storage share into a VM on the same storage:
Code:
root@pve-01:~# cd /mnt/pve/ZFS-01-40G/
root@pve-01:/mnt/pve/ZFS-01-40G# qm importdisk 130103 vm-130102-disk-2.qcow2 ZFS-01-40G --format qcow2
importing disk 'vm-130102-disk-2.qcow2' to VM 130103 ...
The command always stays stuck on that line; if I do the same import over the 10GbE network and adapters, it completes fine.
After the qm importdisk command has been stuck for a few minutes, this appears on the PVE console, in syslog and in dmesg:
Code:
[ 500.881270] nfs: server 10.90.100.101 not responding, still trying
[ 510.865224] nfs: server 10.90.100.101 not responding, still trying
[ 689.313710] nfs: server 10.90.100.101 OK
[ 689.313725] nfs: server 10.90.100.101 OK
[ 869.517915] nfs: server 10.90.100.101 not responding, still trying
[ 869.517963] nfs: server 10.90.100.101 not responding, still trying
[ 1057.961684] nfs: server 10.90.100.101 OK
[ 1057.980952] nfs: server 10.90.100.101 OK
[ 1238.154492] nfs: server 10.90.100.101 not responding, still trying
[ 1238.154543] nfs: server 10.90.100.101 not responding, still trying
[ 1426.598142] nfs: server 10.90.100.101 OK
[ 1426.608204] nfs: server 10.90.100.101 OK
[ 1606.791077] nfs: server 10.90.100.101 not responding, still trying
[ 1606.791099] nfs: server 10.90.100.101 not responding, still trying
[ 1795.245338] nfs: server 10.90.100.101 OK
[ 1795.245371] nfs: server 10.90.100.101 OK
[ 1975.427682] nfs: server 10.90.100.101 not responding, still trying
[ 1975.427733] nfs: server 10.90.100.101 not responding, still trying
[ 2163.872707] nfs: server 10.90.100.101 OK
[ 2163.892665] nfs: server 10.90.100.101 OK
But of course, the storage server stays reachable on the network and responds to ping from the switch.
The Proxmox server, however, no longer responds to ping from the switch after these events.
All links are up and connected on all the equipment, and LACP is OK too.
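I checked it on the PVE side with commands like these (the bond name is an example, I can post the full output):
Code:
root@pve-01:~# cat /proc/net/bonding/bond1    # bond / LACP partner state
root@pve-01:~# ip -br link show               # link state of all interfaces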
In addition, a few minutes later, all items in the admin interface get the grey ? icon.
So it seems that using this NIC makes Proxmox hang.
After a reboot, everything is OK again, and the problem reappears if I launch my qm importdisk test again.
Does anyone have an idea, or has anyone already seen this issue?
I can run other tests or send any logs that might help if you ask.
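For example, I can run things like this on the XL710 ports and post the results (port names are examples):
Code:
root@pve-01:~# ethtool -S enp5s0f0 | grep -iE 'err|drop'   # NIC counters for errors and drops
root@pve-01:~# ip -s link show bond1                       # bond-level traffic and error statistics
root@pve-01:~# dmesg | grep -i i40e                        # messages from the XL710 (i40e) driver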
Thanks in advance.