Search results

  1.

    Garbage Collector - TASK ERROR: unexpected error on datastore traversal: Not a directory (os error 20)

    Hi there, for some days the Garbage Collector operation has failed with the following error: TASK ERROR: unexpected error on datastore traversal: Not a directory (os error 20) Backups work fine. How can I fix the garbage collector error? Package version: proxmox-backup: 2.1-1 (running...
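
    "Not a directory (os error 20)" generally means the traversal hit a regular file where the datastore layout expects a directory. A hedged diagnostic sketch, assuming a hypothetical datastore path /mnt/datastore/backup (take the real path from the datastore config):

      # cat /etc/proxmox-backup/datastore.cfg
      # find /mnt/datastore/backup -maxdepth 3 -not -path '*/.chunks/*' -not -type d

    Review the output before deleting anything: some regular files (for example the per-group owner file) are expected, but any other stray file in the group/snapshot hierarchy is a likely cause of the traversal error.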
  2.

    Mellanox ConnectX-6 Dx - full mesh - slow into VM

    I try to set MTU 9000 in the VM (Ubuntu 10.04): ens19: mtu: 9000 addresses: - 10.15.15.24/24 I restart the VM and test, but nothing changes. The hardware NUMA setting is enabled: # numactl --hardware available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15...
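
    As a side note, a quick way to verify from inside the guest that jumbo frames actually work end-to-end (a sketch using the peer address from this thread; -s 8972 is the 9000-byte MTU minus 28 bytes of IP/ICMP headers):

      # ip link show ens19
      # ping -M do -s 8972 -c 3 10.15.15.102

    With -M do fragmentation is forbidden, so the ping fails if any hop in the path is still below MTU 9000.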
  3.

    Mellanox ConnectX-6 Dx - full mesh - slow into VM

    I set the MTU 9000 value in the Open vSwitch configuration on the host nodes; how is it configured on the VM's vNICs? I have enabled NUMA in the VM configuration; how can I enable it in the hardware?
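
    On the Proxmox side both settings can be applied per VM with qm; a hedged sketch, assuming a hypothetical VMID 100 and a mesh bridge vmbr1 (keep the existing virtio=<MAC> from the VM config when re-setting the NIC, otherwise a new MAC is generated):

      # qm set 100 --numa 1
      # qm set 100 --net1 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,mtu=9000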
  4.

    Mellanox ConnectX-6 Dx - full mesh - slow into VM

    Update: I set the multiqueue to 16 in the configuration file, but nothing changed in terms of performance. I set the vCPU type to Icelake-Server-noTSX, but the network performance did not change either. In conclusion, I leave the multiqueue at 8 and am satisfied with 30 Gbps :)
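
    For completeness, multiqueue is a per-vNIC option in the VM config; a minimal sketch assuming a hypothetical VMID 100 (keep the MAC already present in the config):

      # qm set 100 --net1 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,queues=8
      # ethtool -L ens19 combined 8

    The second command runs inside the guest and makes it actually use the extra queues; the usual rule of thumb is not to set more queues than vCPUs.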
  5.

    Mellanox ConnectX-6 Dx - full mesh - slow into VM

    Yes! Thanks a lot, spirit. Multiqueue on the NIC makes the difference. Without multiqueue: # iperf -e -c 10.15.15.102 -P 4 -p 9999 ------------------------------------------------------------ Client connecting to 10.15.15.102, TCP port 9999 with pid 551007 Write buffer size: 128 KByte TCP window...
  6.

    Mellanox ConnectX-6 Dx - full mesh - slow into VM

    Hi there, I have 3 Proxmox nodes (Supermicro SYS-120C-TN10R) connected via Mellanox 100GbE ConnectX-6 Dx cards in cross-connect mode, using MCP1600-C00AE30N DAC cables (Ethernet 100GbE QSFP28, 0.5 m): # lspci -vv -s 98:00.0 98:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6...
  7.

    Multipath not working after Proxmox 7 upgrade

    I haven't found a solution yet, but in a few weeks I'll have the Intel Modular server available and I'll try to get multipath working.
  8.

    Multipath not working after Proxmox 7 upgrade

    multipath -v3 shows: Jul 20 15:39:12 | set open fds limit to 1048576/1048576 Jul 20 15:39:12 | loading //lib/multipath/libchecktur.so checker Jul 20 15:39:12 | checker tur: message table size = 3 Jul 20 15:39:12 | loading //lib/multipath/libprioconst.so prioritizer Jul 20 15:39:12 |...
  9.

    Multipath not working after Proxmox 7 upgrade

    Hi, I upgraded one of the blades on my Intel Modular Server from version 6.4 to 7.0. After the reboot multipath does not show any device: root@proxmox106:~# multipath -ll root@proxmox106:~# On another node with Proxmox 6.4 I have: root@proxmox105:~# multipath -ll sistema (222be000155bb7f72) dm-0...
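
    One thing worth checking after this upgrade is whether the newer multipath-tools in Debian Bullseye/Proxmox 7 is skipping the LUN because of its find_multipaths/WWID whitelist handling; a hedged troubleshooting sketch (/dev/sdb stands in for one path of the LUN):

      # multipathd show config | grep -i find_multipaths
      # multipath -a /dev/sdb
      # systemctl restart multipathd && multipath -ll

    If the -v3 output shows the device being skipped because its WWID is not whitelisted, adding it (or relaxing find_multipaths in /etc/multipath.conf) is usually the missing step.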
  10.

    lvremove -> Logical volume ..... is used by another device.

    Hi, I resolved it with this: http://blog.roberthallam.org/2017/12/solved-logical-volume-is-used-by-another-device/comment-page-1/
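
    For the record, the fix described at that link revolves around finding the device-mapper node that still holds the LV open and removing it; a hedged sketch using the volume from this thread (the holder name is a placeholder to be taken from the sysfs output):

      # ls /sys/block/$(basename $(readlink /dev/volssd-vg/vm-520-disk-2))/holders
      # dmsetup remove <holder-name>
      # lvremove /dev/volssd-vg/vm-520-disk-2

    The holder is typically a stale partition or kpartx mapping created on top of the LV while the VM was running.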
  11.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Hello, in my case it seems that the problem with corosync has been solved by the latest update: # dpkg -l | grep knet ii libknet1:amd64 1.12-pve1 amd64 kronosnet core switching implementation Before this update corosync continuously reported...
  12.

    lvremove -> Logical volume ..... is used by another device.

    Hi, I am trying to remove an unused LVM volume: # lvremove -f /dev/volssd-vg/vm-520-disk-2 Logical volume volssd-vg/vm-520-disk-2 is used by another device. Same result with the command: # lvchange -a n /dev/volssd-vg/vm-520-disk-2 Logical volume volssd-vg/vm-520-disk-2 is used by another device. I try to...
  13.

    Nodes red after enabling firewall in Proxmox 4.2

    Hello, I created a 4-node cluster that worked perfectly until I enabled the firewall on the cluster and the VM. Now the problem is that every minute the nodes turn red, and the syslog reports this: Aug 29 18:36:23 proxmox106 corosync[30192]: [TOTEM ] FAILED TO RECEIVE Aug 29 18:36:26 proxmox106...
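
    Since corosync in Proxmox 4.x relies on multicast plus UDP 5404/5405, a firewall that drops either will produce exactly this kind of flapping; a hedged way to check (the node names are placeholders):

      # omping -c 600 -i 1 -q proxmox105 proxmox106 proxmox107 proxmox108
      # iptables -L -n | grep -E '5404|5405'

    If omping reports multicast loss once the firewall is enabled, an explicit rule allowing the cluster network traffic between the nodes is needed before the nodes will stay green.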
  14.

    NFS storage is not online (500)

    Hi, I also have a problem with an NFS share. I have a cluster of two nodes with a NAS attached via an NFS share, and it has always run smoothly; last week I added a blade with the same version of Proxmox (3.4-11) and also configured the NFS storage. The problem is that the new blade does...
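
    A few checks from the new blade can narrow down whether this is an export/network problem or a storage configuration problem; a hedged sketch with a placeholder NAS address:

      # pvesm status
      # showmount -e 192.168.1.50
      # rpcinfo -p 192.168.1.50

    pvesm status shows whether Proxmox itself marks the NFS storage as inactive, while showmount and rpcinfo confirm the blade can actually reach the export and the NFS services on the NAS.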
  15.

    Ceph remains in HEALTH_WARN after OSD removal

    Update: I changed the disk and recreated the OSD. After the rebuild the system is now in HEALTH_OK. Thanks a lot. Lorenzo
  16.

    Ceph remains in HEALTH_WARN after OSD removal

    I have a cluster of 3 servers with Ceph storage over 9 disks (3 per server). One OSD went down/out, so I "removed" it; after that the system started to rebalance data over the remaining OSDs, but after some hours the rebalance stopped with 1 pg stuck unclean: # ceph -s...
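
    The usual starting point for a stuck PG is the health detail and a PG query; a minimal sketch (the PG id is a placeholder taken from the health output):

      # ceph health detail
      # ceph pg dump_stuck unclean
      # ceph pg 2.3f query

    The query output shows which OSDs the PG is waiting for and why recovery stalled after the OSD was removed.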
  17.

    Problem installing snl (solidworks network licensing)

    This is the message that appears after the installation of SolidWorks Enterprise PDM:
  18.

    Problem installing snl (solidworks network licensing)

    Thanks fireon and macday, I have the same problem and I tried your solution without success: proxmox00:~# dmidecode -t 0 # dmidecode 2.11 SMBIOS 2.6 present. Handle 0x0000, DMI type 0, 24 bytes BIOS Information Vendor: Intel Corp. Version: SE5C600.86B.01.03.0002.062020121504...
  19.

    Restoring to CEPH

    Thank you for the reply, I resolved it another way: - add a new SATA HDD to one node of the cluster; - add to that node a new "directory" storage using the new HDD; - restore the VM into that storage; - live-move the disk to the Ceph storage; - remove the SATA HDD. Lorenzo
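
    For reference, the same workaround can be driven entirely from the command line; a hedged sketch with placeholder IDs (VMID 100, the backup file name, and the storage names local-dir and ceph-storage are examples):

      # qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.lzo 100 --storage local-dir
      # qm move_disk 100 virtio0 ceph-storage --delete

    move_disk performs the live move of the disk, and --delete drops the copy on the temporary directory storage once it completes.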
  20.

    Restoring to CEPH

    Hi, when will this feature be implemented? I have tried with release 3.3 but it doesn't work. Thanks in advance, Lorenzo