tried but still no luck:
lxc_map_ids: 3672 newgidmap failed to write mapping "newgidmap: gid range [1000-1001) -> [1000-1001) not allowed": newgidmap 464103 0 100000 1000 1000 1000 1 1001 101001 64535
lxc_spawn: 1791 Failed to set up id mapping.
__lxc_start: 2074 Failed to spawn container "103"...
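Side note: the newgidmap variant of this error usually means the gid range was never delegated in /etc/subgid (newgidmap only consults /etc/subgid, newuidmap only /etc/subuid). A minimal sketch of the entries both files would need for this mapping, assuming the host uid/gid is 1000:

# add to both /etc/subuid and /etc/subgid
root:1000:1
root:100000:65536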
Hello,
I'm trying to pass two mounted hard disks to container 103.
When I start it I get:
lxc_map_ids: 3672 newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed": newuidmap 458165 0 100000 1000 1000 1000 1 1001 101001 64530
lxc_spawn: 1791 Failed to set...
I tried to do it as stated here:
https://forum.proxmox.com/threads/reinstall-ceph-on-proxmox-6.57691/page-2#post-300278
but no luck.
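For completeness, a minimal sketch of the idmap section that corresponds to the mapping in the error message above, assuming container 103 and a single host uid/gid of 1000 (the last range size of 64535 is the usual 65536 - 1001; adjust it to your own subuid/subgid allocation):

# /etc/pve/lxc/103.conf
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

These lines only take effect if /etc/subuid and /etc/subgid both contain root:1000:1, which is exactly what the "not allowed" error is complaining about.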
This is a clean cluster, so I will reinstall Proxmox on all 3 nodes.
I also found this
(I don't know if it causes the problem) in:
/var/log/ceph/ceph-mon.pve1.log
2022-01-26T17:28:38.760+0100 7fd0145f2580 -1 monitor data directory at '/var/lib/ceph/mon/ceph-pve1' does not exist: have you run 'mkfs'?
2022-01-26T17:28:49.005+0100 7f7ae6332580 0 set uid:gid to...
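Since a reinstall is planned anyway: if only the monitor state is broken, a sketch of recreating it with the pveceph tooling (the node name pve1 is taken from the log path; this assumes the cluster holds no data yet):

systemctl stop ceph-mon@pve1
pveceph mon destroy pve1        # may complain if the mon never registered
rm -rf /var/lib/ceph/mon/ceph-pve1
pveceph mon create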
Hello,
I have a three-node Proxmox cluster:
optiplex 7020
xeon e3-1265lv3
16GB
120GB SSD for OS
512GB nvme for ceph
1GbE network for "external" access
dual 10GbE network (for cluster)
Network is connected as stated here:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
as routed...
I checked all network settings, the nodes were rebooted, and each machine can ping the others.
I know I'm cluttering a networking thread, but the problem is that I cannot run Ceph.
There were no errors during installation;
I did everything as in the docs, but Ceph gives me a timeout error;
this is my...
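For a timeout from pveceph status, one thing worth checking is that the monitors are bound to the mesh network in /etc/pve/ceph.conf. A sketch of the relevant section, assuming the 192.168.20.0/24 subnet from the interfaces config below; the pve2/pve3 addresses are guesses:

[global]
        public_network = 192.168.20.0/24
        cluster_network = 192.168.20.0/24
        mon_host = 192.168.20.10 192.168.20.20 192.168.20.30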
Here you are:
pve1:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.10/24
        mtu 9000
        up ip route add 192.168.20.30/32 dev enp1s0f0
        down ip route del 192.168.20.30/32

auto enp1s0f1...
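With the routed setup, each node should end up with one /32 route per peer. A quick sanity check on pve1 (the second route is an assumption, since the config is cut off above):

ip route show | grep 192.168.20
# expected, roughly:
# 192.168.20.30 dev enp1s0f0 scope link
# 192.168.20.20 dev enp1s0f1 scope link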
OK - second option:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
The routed setup works great.
I have:
a bridge on the 1GbE card for external access
and dual 10GbE for the mesh cluster on each node.
All cards have MTU set to 9000.
But Ceph gives me an error on each node:
root@pve2:~# pveceph status...
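With everything at MTU 9000 it is also worth confirming that jumbo frames actually pass between the nodes; a don't-fragment ping with an 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers) tests the full path:

ping -M do -s 8972 192.168.20.30

If that fails while a normal ping works, an MTU mismatch somewhere on the path can produce exactly this kind of Ceph timeout.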
Hi,
I made a fresh installation on 3 cluster nodes with the network as follows:
is there any option to set up a cluster?
If not, is there anything I can do to set one up?
Each node's hosts file now points to the other two nodes, and communication at the OS level works properly,
but setting up the cluster seems...
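For reference, a minimal sketch of creating the cluster from the CLI, assuming the hostnames resolve as described (the cluster name and address are placeholders):

# on the first node
pvecm create mycluster

# on each of the other nodes
pvecm add <ip-of-first-node>

Afterwards, pvecm status should show all three nodes with quorum.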
Thanks for your answer,
but unfortunately when I pass 04:00, pfSense cannot find any network interfaces ...
(it should work, since it works when I pass only the second port)
Hello,
I have an ASRock J4105M mobo with Proxmox 5.3 and an Intel dual-port network card which I would like to pass through to pfSense.
04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev ff)
04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller...
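For what it's worth: in the VM config, giving only the bus:device passes all functions of the card, while appending the function number limits it to one port. A sketch, assuming the VM id is 100 (also note the "(rev ff)" in the lspci output above, which can mean the card is not responding on the bus at that moment):

# /etc/pve/qemu-server/100.conf
hostpci0: 04:00        # both ports, 04:00.0 and 04:00.1
# or one port per entry:
#hostpci0: 04:00.0
#hostpci1: 04:00.1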
I bought an ASRock J4105M (micro ATX) with 3 PCIe slots (2 x x1 and 1 x x16, logically x1); passthrough of the network card and a Dell PERC H310 RAID card works properly. It acts as a NAS/router/home automation combo ;-)
It also works with a PCIe-to-NVMe adapter as the system disk.
Inside the container there are /dev/sdb and /dev/sdc.
Maybe the problem is that my drives are formatted as whole disks (/dev/sdb and /dev/sdc) without any partition table.
I would prefer not to reformat them, since they contain a large amount of data.
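Whole-disk filesystems are not a problem for a container as such; the usual pattern is to mount them on the host and bind-mount the directories into the container instead of passing the block devices through. A sketch with hypothetical mount points and filesystem type:

# on the host, e.g. in /etc/fstab
/dev/sdb  /mnt/disk1  ext4  defaults  0  2
/dev/sdc  /mnt/disk2  ext4  defaults  0  2

# /etc/pve/lxc/103.conf
mp0: /mnt/disk1,mp=/mnt/disk1
mp1: /mnt/disk2,mp=/mnt/disk2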