If a template like CentOS comes without sshd ... the Debian one has it!
Simply follow these steps (see the command sketch below the list):
1) download a normal lxc / openvz template
2) create an lxc container with this template
3) boot it
4) open it (pct enter <id>)
5) install / modify everything you want
6) remove all network interfaces
7)...
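Roughly like this on the shell (container ID 200, the CentOS template name and the local-lvm storage are only examples, adjust them to your setup):
[code]
pveam update
pveam available --section system              # pick a template from the list
pveam download local centos-7-default_20170504_amd64.tar.xz
pct create 200 local:vztmpl/centos-7-default_20170504_amd64.tar.xz --hostname template-build --storage local-lvm
pct start 200
pct enter 200                                 # install / modify what you need inside
[/code]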
Sorry, the list command is for already downloaded images ...
pveam available --section system
system alpine-3.3-default_20160427_amd64.tar.xz
system alpine-3.4-default_20161206_amd64.tar.xz
system alpine-3.5-default_20170504_amd64.tar.xz
system...
Update your container list with "pveam update",
then install a new container based on one of them, for example:
pveam list local
...
local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz 188.48MB
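To create a container from that already downloaded template (ID 101, hostname and local-lvm storage are just examples):
[code]
pct create 101 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz --hostname deb9 --storage local-lvm
pct start 101 && pct enter 101
[/code]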
sshd is running without any problems ...
Strange, I bet your modinfo would issue an error ...
To find out which module is complaining with "disagrees about version of symbol module_layout", check dmesg.
Perhaps an "apt-get update && apt-get dist-upgrade" will solve your problem ...
Hi
this is typically an outdated-kernel message when loading nfs ...
modinfo should read:
modinfo nfs
filename: /lib/modules/4.10.17-3-pve/kernel/fs/nfs/nfs.ko
license: GPL
author: Olaf Kirch <okir@monad.swb.de>
alias: nfs4
alias: fs-nfs4
alias...
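A quick way to compare the running kernel against the kernel the module was built for (if they differ, you are still missing a reboot into the current kernel):
[code]
uname -r                  # running kernel, e.g. 4.10.17-3-pve
modinfo -F vermagic nfs   # kernel version the nfs module was built for
[/code]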
You are using the enterprise repo ... do you have a subscription? I think not ... so you should use:
deb http://download.proxmox.com/debian/pve stretch pvetest
and
cat /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-luminous stretch main
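Something along these lines should switch the repos (the pve-test.list filename is just my choice, any name under sources.list.d works):
[code]
# disable the enterprise repo, it needs a valid subscription
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pve-test.list
apt-get update
[/code]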
Thanks for this hint!
But I'm still stuck ...
I modified ceph.conf:
[code]
ms_type=async+rdma
ms_cluster_type = async+rdma
ms_async_rdma_device_name=mlx4_0
[/code]
and NO GID parameter! Because ceph.conf is common to all nodes and each node has a distinct GID ... or am I wrong?
root@pve01:~#...
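If you want to see which GIDs a node actually has (assuming device mlx4_0, port 1), sysfs lists them per node:
[code]
for g in /sys/class/infiniband/mlx4_0/ports/1/gids/*; do
    echo "$g: $(cat $g)"
done
[/code]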
Fabian, thanks for the response, but we have no errors in regular operation.
I only have problems activating RDMA to optimize OSD bluestore operations ...
Is there no known customer operating Mellanox equipment with Proxmox 5 and RDMA over Ethernet (not InfiniBand)?
Alex, and please someone @dietmar ... I have no clue how to get RDMA working on Stretch!
For example, rping does not work:
rping -s -v 192.168.100.141
rdma_create_event_channel: No such device
ifconfig ens1d1
ens1d1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet...
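"rdma_create_event_channel: No such device" usually means the RDMA core modules are not loaded, so /dev/infiniband/rdma_cm does not exist. A quick check (module names assumed for an mlx4 / ConnectX-3 setup):
[code]
lsmod | grep -E 'mlx4|rdma'
modprobe -a mlx4_ib ib_uverbs ib_umad rdma_cm rdma_ucm
ls -l /dev/infiniband/        # rdma_cm and uverbs* devices should show up here
[/code]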
Consider upgrading to PVE 5! Also, bluestore in Ceph is exciting!
BTW, I operate port 1 with the public network connected to an SX1012 switch and Ceph on port 2 on the same switch, but these ports are tagged as VLANs on the switch ... no need to have a VLAN on port 2 for Ceph ... I found multicast might be a problem if...
Might be an issue with flow control on the card and on the switch ... please check these settings.
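For example with ethtool (interface name taken from the post above, adjust as needed):
[code]
ethtool -a ens1d1             # show current pause / flow-control settings
ethtool -A ens1d1 rx on tx on # enable pause frames if the switch side expects them
[/code]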
I have no problems on PVE 5 and Stretch with the X3-Pro running at 56 Gbit and a Mellanox switch ... see my signature ..
I updated the firmware of the X3-Pro to the latest version and everything is smooth :)
Maybe these settings need to be applied?
https://community.mellanox.com/docs/DOC-2693
...
Open /etc/security/limits.conf and add the following lines to pin the memory. RDMA is tightly coupled to physical memory addresses.
* soft memlock unlimited
* hard memlock unlimited
root soft...
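After a re-login you can verify that the new limit is really in place:
[code]
ulimit -l        # should report "unlimited"
[/code]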
alex,
I have a ConnectX-3 Pro, which card do you have?
81:00.0 Ethernet controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
I have no clue if this card is RDMA capable ...
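If the RDMA stack sees the card at all, ibv_devinfo (from the ibverbs-utils package, if I remember right) should list it:
[code]
ibv_devinfo        # should show mlx4_0 with its ports if RDMA/RoCE is usable
[/code]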
... and fetching the source from git and compiling??? Phew, this will perhaps disturb the interaction with the PVE modules ...
same cluster but now bluestore, Tom!
some 10% better performance!
I think with RDMA enabled we will gain another 10% to 20% speedup
but I still have no clue how to accomplish RDMA with Ceph and the underlying 56 Gbit Mellanox ConnectX-3 cards ...
rados bench -p test 60 write --no-cleanup -t 256
hints =...
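For completeness, the matching read benchmark and the cleanup afterwards (same pool name as above):
[code]
rados bench -p test 60 seq -t 256   # sequential reads of the objects left behind by --no-cleanup
rados -p test cleanup               # remove the benchmark objects when done
[/code]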