Hi there,
for some days now the garbage collector job has been failing with the following error:
TASK ERROR: unexpected error on datastore traversal: Not a directory (os error 20)
Backups work fine.
How can I fix the garbage collector error?
Package version:
proxmox-backup: 2.1-1 (running...
I tried to set MTU 9000 inside the VM (Ubuntu 10.04):
ens19:
  mtu: 9000
  addresses:
    - 10.15.15.24/24
I restarted the VM and tested again, but nothing changed.
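For reference, this is a sketch of how I am checking whether jumbo frames actually pass end to end (the peer address below is only an example on the same subnet):
ip link show ens19                # the interface should report mtu 9000
ping -M do -s 8972 10.15.15.102   # 8972 = 9000 - 28 bytes of IP/ICMP headers, with fragmentation prohibited
If the large ping fails, some hop in between (bridge, OVS port, physical NIC) is still at MTU 1500.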
The hardware NUMA setting is enabled:
# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15...
I set the MTU to 9000 in the Open vSwitch configuration on the host nodes; how do I configure it on the VM's vNICs?
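For what it's worth, this is a sketch of what I am testing on the host side (VM ID 100 and bridge vmbr1 are placeholders; if I understand correctly, the mtu= option on the netX line applies to VirtIO NICs):
# set the vNIC MTU in the VM config
qm set 100 --net0 virtio,bridge=vmbr1,mtu=9000
# which ends up in /etc/pve/qemu-server/100.conf as:
# net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1,mtu=9000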
I have enabled NUMA in the VM configuration; how can I enable it in the hardware?
Update:
I set the multiqueue to 16 in the configuration file, but nothing changed in terms of performance.
I set the vCPU type to Icelake-Server-noTSX, but in this case too the network performance did not change.
In conclusion, I am leaving the multiqueue at 8 and I am satisfied with 30 Gbps :)
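For anyone searching later, this is roughly how I set it (VM ID and MAC are placeholders; queues= is the multiqueue setting on the netX line, and ethtool can confirm it inside the guest):
# /etc/pve/qemu-server/100.conf
net1: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1,mtu=9000,queues=8
# inside the VM:
ethtool -l ens19              # show current/maximum channel counts
ethtool -L ens19 combined 8   # match the queue count to the vCPUs handling the traffic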
Yes! Thanks a lot, spirit.
Multiqueue on the NIC makes the difference.
Without multiqueue:
# iperf -e -c 10.15.15.102 -P 4 -p 9999
------------------------------------------------------------
Client connecting to 10.15.15.102, TCP port 9999 with pid 551007
Write buffer size: 128 KByte
TCP window...
Hi there,
I have 3 Proxmox nodes (Supermicro SYS-120C-TN10R) connected via Mellanox 100GbE ConnectX-6 Dx cards in cross-connect mode, using MCP1600-C00AE30N DAC cables (Ethernet 100GbE QSFP28, 0.5 m).
# lspci -vv -s 98:00.0
98:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6...
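For context, a minimal sketch of how one of these point-to-point links could look in /etc/network/interfaces (interface name and addressing are only illustrative, not my real values):
auto enp152s0f0np0
iface enp152s0f0np0 inet static
        address 10.10.10.1/30
        mtu 9000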
Hi,
I upgraded one of the blades on my Intel Modular Server from version 6.4 to 7.0.
After the reboot, multipath does not show any devices:
root@proxmox106:~# multipath -ll
root@proxmox106:~#
On another node with Proxmox 6.4 I have:
root@proxmox105:~# multipath -ll
sistema (222be000155bb7f72) dm-0...
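In the meantime, a sketch of the checks I am running on the upgraded node (nothing here is specific to my setup, just generic multipath troubleshooting):
dpkg -l | grep multipath-tools    # make sure the package survived the upgrade
systemctl status multipathd       # the daemon must be running
cat /etc/multipath/wwids          # the known device WWIDs
multipath -v3                     # verbose discovery, shows why paths are skipped
multipathd reconfigure            # reload the configuration after any change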
Hello,
in my case it seems the corosync problem was solved by the latest update:
# dpkg -l | grep knet
ii libknet1:amd64 1.12-pve1 amd64 kronosnet core switching implementation
Before this update, corosync was continuously reporting...
Hi,
I am trying to remove an unused LVM logical volume:
# lvremove -f /dev/volssd-vg/vm-520-disk-2
Logical volume volssd-vg/vm-520-disk-2 is used by another device.
Same result with the command:
# lvchange -a n /dev/volssd-vg/vm-520-disk-2
Logical volume volssd-vg/vm-520-disk-2 is used by another device.
I try to...
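A sketch of how I am trying to track down the holder (the partition-mapping name below is only an example; device-mapper names double every dash in the VG/LV names):
dmsetup ls --tree              # shows which dm device sits on top of the LV
ls /sys/block/dm-*/holders/    # same information from sysfs
# if the holder turns out to be a stale mapping (e.g. created by kpartx), remove it first:
dmsetup remove volssd--vg-vm--520--disk--2p1
lvremove /dev/volssd-vg/vm-520-disk-2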
Hello,
I created a 4-node cluster that worked perfectly until I enabled the firewall on the cluster and the VM.
Now the problem is that every minute the nodes turn red, and the syslog reports this:
Aug 29 18:36:23 proxmox106 corosync[30192]: [TOTEM ] FAILED TO RECEIVE
Aug 29 18:36:26 proxmox106...
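What I am going to try is to explicitly allow the corosync traffic in /etc/pve/firewall/cluster.fw before enabling the firewall again (the subnet is a placeholder for my cluster network; corosync uses UDP ports 5404-5405, and this multicast-based version also needs IGMP/multicast to pass between the nodes):
[RULES]
IN ACCEPT -source 10.15.15.0/24 -p udp -dport 5404:5405
IN ACCEPT -source 10.15.15.0/24 -p igmp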
Hi,
I also have a problem with an NFS share.
I have a two-node cluster with a NAS attached via an NFS share, and it has always run smoothly; last week I added a blade with the same version of Proxmox (3.4-11) and configured the NFS storage on it as well.
The problem is that the new blade does...
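A sketch of the checks I am running from the new blade (the NAS address is a placeholder):
showmount -e 10.15.15.200    # the export list must be visible from the blade
rpcinfo -p 10.15.15.200      # portmapper/NFS services must be reachable
pvesm status                 # the NFS storage should show up as active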
I have a cluster of 3 servers with Ceph storage over 9 disks (3 per server).
One OSD went down/out, so I "removed" it; after that the system started to rebalance the data over the remaining OSDs, but after some hours the rebalance stopped with 1 pg stuck unclean:
# ceph -s...
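These are the commands I am using to dig further (generic Ceph diagnostics, no assumptions about the pool layout):
ceph health detail            # names the stuck pg and the OSDs it maps to
ceph pg dump_stuck unclean    # list of pgs stuck in the unclean state
ceph osd tree                 # confirm the up/in state of the remaining OSDs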
Thanks fireon and macday,
I have the same problem and I tried your solution without success:
proxmox00:~# dmidecode -t 0
# dmidecode 2.11
SMBIOS 2.6 present.
Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
Vendor: Intel Corp.
Version: SE5C600.86B.01.03.0002.062020121504...
Thank you for the reply,
I resolved it another way (rough command sketch below):
- added a new SATA HDD to one node of the cluster;
- added a new "directory" storage to that node using the new HDD;
- restored the VM into that storage;
- live-moved the disk to the Ceph storage;
- removed the SATA HDD.
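A rough sketch of the corresponding commands (storage names, VM ID and backup file are placeholders):
pvesm add dir hdd-tmp --path /mnt/hdd-sata                            # the new SATA HDD as a directory storage
qmrestore /mnt/backup/vzdump-qemu-100.vma.lzo 100 --storage hdd-tmp   # restore into the temporary storage
qm move_disk 100 scsi0 ceph-pool --delete 1                           # live-move the disk to the Ceph storage
pvesm remove hdd-tmp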
Lorenzo