I tried that, but still no luck:
lxc_map_ids: 3672 newgidmap failed to write mapping "newgidmap: gid range [1000-1001) -> [1000-1001) not allowed": newgidmap 464103 0 100000 1000 1000 1000 1 1001 101001 64535
lxc_spawn: 1791 Failed to set up id mapping.
__lxc_start: 2074 Failed to spawn container "103"...
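The "not allowed" part of the mapping error above usually means root on the host is not permitted to delegate uid/gid 1000 to the container. A sketch of the usual fix for an unprivileged container (assuming the mounted disks are owned by host uid/gid 1000; the idmap ranges below mirror the ones already visible in the error message, and the container id 103 comes from this thread):

```
# /etc/subuid and /etc/subgid on the host: besides the default high range,
# explicitly allow root to map id 1000.
root:100000:65536
root:1000:1

# /etc/pve/lxc/103.conf: map container ids 0-999 and 1001+ into the high
# range, but pass container id 1000 straight through to host id 1000.
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

Without the `root:1000:1` lines, newuidmap/newgidmap refuse exactly the `[1000-1001)` range the log complains about.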
Hello,
I'm trying to pass two mounted hard disks through to container 103.
When I start it, I get:
lxc_map_ids: 3672 newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed": newuidmap 458165 0 100000 1000 1000 1000 1 1001 101001 64530
lxc_spawn: 1791 Failed to set...
I tried to do it as stated in:
https://forum.proxmox.com/threads/reinstall-ceph-on-proxmox-6.57691/page-2#post-300278
but no luck.
This is a clean cluster; I will reinstall Proxmox on all 3 nodes.
I also found this (I don't know if it causes the problem) in:
/var/log/ceph/ceph-mon.pve1.log
2022-01-26T17:28:38.760+0100 7fd0145f2580 -1 monitor data directory at '/var/lib/ceph/mon/ceph-pve1' does not exist: have you run 'mkfs'?
2022-01-26T17:28:49.005+0100 7f7ae6332580 0 set uid:gid to...
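That mon log line suggests the monitor's data directory was never created (or was wiped). Before reinstalling all three nodes, it may be worth checking and recreating the monitor; a sketch, assuming the node name pve1 taken from the log path above:

```
# Check whether the monitor data directory exists at all:
ls -l /var/lib/ceph/mon/ceph-pve1

# If it is missing, remove the stale monitor entry and recreate it
# with the standard Proxmox tooling:
pveceph mon destroy pve1
pveceph mon create
```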
Hello,
I have a three-node Proxmox cluster; each node is:
optiplex 7020
xeon e3-1265lv3
16GB
120GB SSD for OS
512GB nvme for ceph
1GbE network for "external" access
dual 10GbE network (for cluster)
The network is connected as described here:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
as the routed...
I checked all the network settings, the nodes were rebooted, and each machine can ping the others.
I know I'm messing with a thread about networking, but the problem is that I cannot run Ceph.
There were no errors during installation;
I did everything as in the docs, but Ceph gives me a timeout error;
this is my...
Here you are:
pve1:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.10/24
        mtu 9000
        up ip route add 192.168.20.30/32 dev enp1s0f0
        down ip route del 192.168.20.30/32

auto enp1s0f1...
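For comparison, the matching routed config on the second node would follow the same pattern from the wiki. This is only a sketch under assumptions: that pve2's mesh address is 192.168.20.20, that pve1/pve3 are .10/.30 as above, and that the interface names are the same (adjust to your hardware):

```
auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.20.20/24
        mtu 9000
        up ip route add 192.168.20.10/32 dev enp1s0f0
        down ip route del 192.168.20.10/32

auto enp1s0f1
iface enp1s0f1 inet static
        address 192.168.20.20/24
        mtu 9000
        up ip route add 192.168.20.30/32 dev enp1s0f1
        down ip route del 192.168.20.30/32
```

In the routed setup each node carries one address on both mesh interfaces, with a /32 host route per peer, so traffic to each neighbour leaves on the correct link.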
OK - second option:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
The routed setup works great.
I have:
a bridge on the 1GbE card for external access
and dual 10GbE for the mesh cluster on each node.
All cards are set to MTU 9000.
But Ceph gives me an error on each node:
root@pve2:~# pveceph status...
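One thing worth ruling out with an MTU-9000 mesh: Ceph timeouts are often caused by jumbo frames not actually passing end to end, even on a switchless mesh, if a NIC or driver silently drops them. A quick check from pve2, assuming pve1's mesh address 192.168.20.10 from the config above:

```
# 8972 = 9000 minus 28 bytes of IP+ICMP headers; -M do forbids fragmentation,
# so the ping only succeeds if jumbo frames really traverse the link.
ping -M do -s 8972 -c 3 192.168.20.10
```

If this fails while a plain `ping` works, the MTU setting is not effective on one of the interfaces.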