new cluster, ceph monitor unknown status

bu2chlc

Member · Jan 27, 2021
I created a cluster with 3 machines (10.10.10.x/24 cluster network and 192.168.1.x/24 public network). I set up ceph on all three machines, starting with "node1". Now node2 and node3 show the ceph monitor as stopped. If I click start, it says it started, but the status still shows stopped. If I try to destroy the monitor, it says mon.node2 does not exist. If I try to create the monitor (choosing "node2" again), I get an error that it already exists. I have the same problem on "node3". I cannot run pveceph purge because it claims there are still monitors. Not sure what my next step should be.
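For reference, these are the sort of checks I'd run from a shell on node2 to see what state the monitor is really in. This is just a sketch using the standard Ceph/Proxmox tools; the mon ID follows the node name, so adjust it for node3 as well. The last two commands are the CLI equivalents of the GUI buttons and usually print a more detailed error than the GUI does.

Code:
# does systemd think the monitor is running on node2?
systemctl status ceph-mon@node2

# recent log output from the monitor unit, often shows why it exits
journalctl -u ceph-mon@node2 --since "1 hour ago"

# which monitors does the cluster itself have in the monmap?
ceph mon dump

# CLI equivalents of the GUI destroy/create actions
pveceph mon destroy node2
pveceph mon create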
 
I am seeing the same thing while attempting to get a test cluster working on three Dell 1950s. The nodes can freely communicate on 172.29.99.0/24 at .48, .49, and .50.

This is the ceph configuration I ended up with:

(Proxmox Virtual Environment 6.3-2, node 'pve0')

[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 172.22.128.1/29
fsid = 07b6e25e-f35d-4d76-82b6-4e6d1eb8e936
mon_allow_pool_delete = true
mon_host = 172.29.99.48
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 172.29.99.48/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pve0]
public_addr = 172.29.99.48

[mon.pve1]
public_addr = 172.29.99.49

[mon.pve2]
public_addr = 172.29.99.50
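One thing that stands out, purely as a comparison: on a cluster where all three monitors are up and in the monmap, the generated mon_host line would normally list all three addresses rather than just .48, something like the following (Ceph accepts space- or comma-separated lists here):

Code:
mon_host = 172.29.99.48 172.29.99.49 172.29.99.50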

And the CRUSH Map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host pve0 {
id -3 # do not change unnecessarily
id -4 class hdd # do not change unnecessarily
# weight 0.133
alg straw2
hash 0 # rjenkins1
item osd.0 weight 0.133
}
host pve1 {
id -5 # do not change unnecessarily
id -6 class hdd # do not change unnecessarily
# weight 0.133
alg straw2
hash 0 # rjenkins1
item osd.1 weight 0.133
}
host pve2 {
id -7 # do not change unnecessarily
id -8 class hdd # do not change unnecessarily
# weight 0.133
alg straw2
hash 0 # rjenkins1
item osd.2 weight 0.133
}
root default {
id -1 # do not change unnecessarily
id -2 class hdd # do not change unnecessarily
# weight 0.400
alg straw2
hash 0 # rjenkins1
item pve0 weight 0.133
item pve1 weight 0.133
item pve2 weight 0.133
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map
Logs

# ceph -s
cluster:
id: 07b6e25e-f35d-4d76-82b6-4e6d1eb8e936
health: HEALTH_OK

services:
mon: 1 daemons, quorum pve0 (age 2d)
mgr: pve0(active, since 2d)
osd: 3 osds: 3 up (since 2d), 3 in (since 8d)

data:
pools: 2 pools, 33 pgs
objects: 3.43k objects, 13 GiB
usage: 41 GiB used, 369 GiB / 410 GiB avail
pgs: 33 active+clean
 

They can communicate on both the cluster and public networks. The nodes also appear to communicate fine in the web GUI/dashboard when I connect to .48, and I can migrate VMs around the ceph cluster, etc. I'm not sure which communication channel is suspected of not working?

root@pve0:~# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 00:1d:09:68:d7:62 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:1d:09:68:d7:64 brd ff:ff:ff:ff:ff:ff
4: enp12s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:10:18:c4:f8:88 brd ff:ff:ff:ff:ff:ff
5: enp12s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:10:18:c4:f8:88 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:10:18:c4:f8:88 brd ff:ff:ff:ff:ff:ff
inet 172.22.128.1/29 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::210:18ff:fec4:f888/64 scope link
valid_lft forever preferred_lft forever
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:1d:09:68:d7:62 brd ff:ff:ff:ff:ff:ff
inet 172.29.99.48/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::21d:9ff:fe68:d762/64 scope link
valid_lft forever preferred_lft forever
root@pve0:~# ping 172.29.99.49
PING 172.29.99.49 (172.29.99.49) 56(84) bytes of data.
64 bytes from 172.29.99.49: icmp_seq=1 ttl=64 time=0.179 ms
64 bytes from 172.29.99.49: icmp_seq=2 ttl=64 time=0.111 ms
^C
--- 172.29.99.49 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 20ms
rtt min/avg/max/mdev = 0.111/0.145/0.179/0.034 ms
root@pve0:~# ping 172.29.99.50
PING 172.29.99.50 (172.29.99.50) 56(84) bytes of data.
64 bytes from 172.29.99.50: icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 172.29.99.50: icmp_seq=2 ttl=64 time=0.105 ms
^C
--- 172.29.99.50 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 28ms
rtt min/avg/max/mdev = 0.105/0.115/0.126/0.015 ms
root@pve0:~# ping 172.22.128.2
PING 172.22.128.2 (172.22.128.2) 56(84) bytes of data.
64 bytes from 172.22.128.2: icmp_seq=1 ttl=64 time=0.100 ms
64 bytes from 172.22.128.2: icmp_seq=2 ttl=64 time=0.065 ms
^C
--- 172.22.128.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 18ms
rtt min/avg/max/mdev = 0.065/0.082/0.100/0.019 ms
root@pve0:~# ping 172.22.128.3
PING 172.22.128.3 (172.22.128.3) 56(84) bytes of data.
64 bytes from 172.22.128.3: icmp_seq=1 ttl=64 time=0.111 ms
64 bytes from 172.22.128.3: icmp_seq=2 ttl=64 time=0.125 ms
^X64 bytes from 172.22.128.3: icmp_seq=3 ttl=64 time=0.138 ms
^X64 bytes from 172.22.128.3: icmp_seq=4 ttl=64 time=0.076 ms
^C
--- 172.22.128.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 61ms
rtt min/avg/max/mdev = 0.076/0.112/0.138/0.025 ms
root@pve0:~#
 
Well, that's from one host. To make certain that packets flow in all directions, ping every interface from every node in the cluster. If that works, the next step is to check that the services are running and that the nodes can reach each other on the Ceph ports.
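To check the service/port side, something along these lines from each node should do it. This is only a sketch: 6789 and 3300 are the standard Ceph monitor ports (msgr v1/v2), and the unit name follows the usual ceph-mon@<hostname> pattern.

Code:
# is the local monitor unit actually running?
systemctl status ceph-mon@$(hostname)

# which Ceph ports is this node listening on?
ss -tlnp | grep ceph

# can we reach the monitor ports on the other nodes?
nc -zv 172.29.99.49 6789
nc -zv 172.29.99.49 3300
nc -zv 172.29.99.50 6789
nc -zv 172.29.99.50 3300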
 
Right, a successful ping already proves two-way traffic, but I will humor you. The only way traffic could flow in one direction and not the other would be something like a stateful firewall in the path, and there is no such thing on this segment.

From pve0 (172.29.99.48):


Last login: Wed Feb 10 09:51:17 EST 2021 on pts/1
Linux pve0 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pve0:~# ssh root@172.29.99.49
Linux pve1 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Feb 4 22:35:41 2021
root@pve1:~# logout
Connection to 172.29.99.49 closed.
root@pve0:~# ssh root@172.29.99.50
Linux pve2 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 1 22:37:26 2021
root@pve2:~# logout
Connection to 172.29.99.50 closed.
root@pve0:~#

From pve1 (172.29.99.49):

Last login: Wed Feb 10 10:26:34 EST 2021 from 172.29.99.48 on pts/0
Linux pve1 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pve1:~# ssh root@172.29.99.48
Linux pve0 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 10 10:25:45 2021
root@pve0:~# logout
Connection to 172.29.99.48 closed.
root@pve1:~# ssh root@172.29.99.50
Linux pve2 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 10 10:26:18 2021 from 172.29.99.48
root@pve2:~# logout
Connection to 172.29.99.50 closed.
root@pve1:~#

From pve2 (172.29.99.50):

Last login: Wed Feb 10 10:27:11 EST 2021 from 172.29.99.48 on pts/0
Linux pve2 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pve2:~# ssh root@172.29.99.48
Linux pve0 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 10 10:26:46 2021 from 172.29.99.49
root@pve0:~# logout
Connection to 172.29.99.48 closed.
root@pve2:~# ssh root@172.29.99.49
Linux pve1 5.4.73-1-pve #1 SMP PVE 5.4.73-1 (Mon, 16 Nov 2020 10:52:16 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 10 10:26:35 2021
root@pve1:~# logout
Connection to 172.29.99.49 closed.
root@pve2:~#

> then the next step is to check that the services are running and that the nodes can reach each other on the Ceph ports.

The service not showing as started is a symptom of the problem.

root@pve0:~# telnet 172.29.99.49 6789
Trying 172.29.99.49...
Connected to 172.29.99.49.
Escape character is '^]'.
ceph v027c1ᆬc0^]
telnet> quit
Connection closed.
root@pve0:~# telnet 172.29.99.50 6789
Trying 172.29.99.50...
Connected to 172.29.99.50.
Escape character is '^]'.
ceph v027c2c0^]
telnet> quit
Connection closed.
root@pve0:~#

root@pve1:/var/log/pve/tasks# telnet 172.29.99.48 6789
Trying 172.29.99.48...
Connected to 172.29.99.48.
Escape character is '^]'.
ceph v027c0c1^]
telnet> quit
Connection closed.
root@pve1:/var/log/pve/tasks# telnet 172.29.99.50 6789
Trying 172.29.99.50...
Connected to 172.29.99.50.
Escape character is '^]'.
ceph v027c2ъc1^]
telnet> quit
Connection closed.

root@pve2:~# telnet 172.29.99.48 6789
Trying 172.29.99.48...
Connected to 172.29.99.48.
Escape character is '^]'.
ceph v027c0&c2^]
telnet> quit
Connection closed.
root@pve2:~# telnet 172.29.99.49 6789
Trying 172.29.99.49...
Connected to 172.29.99.49.
Escape character is '^]'.
ceph v027c1֬c2^]
telnet> quit
Connection closed.
root@pve2:~#

Code:
root@pve0:~# ss -tulw
Netid    State     Recv-Q    Send-Q       Local Address:port         Peer Address:port  
udp      UNCONN    0         0                  0.0.0.0:sunrpc            0.0.0.0:*     
udp      UNCONN    0         0                127.0.0.1:863               0.0.0.0:*     
udp      UNCONN    0         0                  0.0.0.0:34676             0.0.0.0:*     
udp      UNCONN    0         0                  0.0.0.0:54429             0.0.0.0:*     
udp      UNCONN    0         0             172.29.99.48:5405              0.0.0.0:*     
udp      UNCONN    0         0                     [::]:46837                [::]:*     
udp      UNCONN    0         0                     [::]:47313                [::]:*     
udp      UNCONN    0         0                     [::]:sunrpc               [::]:*     
tcp      LISTEN    0         64                 0.0.0.0:39243             0.0.0.0:*     
tcp      LISTEN    0         128                0.0.0.0:sunrpc            0.0.0.0:*     
tcp      LISTEN    0         512           172.22.128.1:6800              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6800              0.0.0.0:*     
tcp      LISTEN    0         512           172.22.128.1:6801              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6801              0.0.0.0:*     
tcp      LISTEN    0         512           172.22.128.1:6802              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6802              0.0.0.0:*     
tcp      LISTEN    0         512           172.22.128.1:6803              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6803              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6804              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6805              0.0.0.0:*     
tcp      LISTEN    0         128              127.0.0.1:85                0.0.0.0:*     
tcp      LISTEN    0         128                0.0.0.0:ssh               0.0.0.0:*     
tcp      LISTEN    0         128                0.0.0.0:3128              0.0.0.0:*     
tcp      LISTEN    0         128                0.0.0.0:58489             0.0.0.0:*     
tcp      LISTEN    0         100              127.0.0.1:smtp              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:3300              0.0.0.0:*     
tcp      LISTEN    0         512           172.29.99.48:6789              0.0.0.0:*     
tcp      LISTEN    0         128                0.0.0.0:8006              0.0.0.0:*     
tcp      LISTEN    0         128                   [::]:sunrpc               [::]:*     
tcp      LISTEN    0         64                    [::]:38739                [::]:*     
tcp      LISTEN    0         128                   [::]:ssh                  [::]:*     
tcp      LISTEN    0         100                  [::1]:smtp                 [::]:*     
tcp      LISTEN    0         128                   [::]:45095                [::]:*     
root@pve0:~#

root@pve1:/var/log/pve/tasks# ss -tulw
Netid   State    Recv-Q   Send-Q       Local Address:port         Peer Address:port  
udp     UNCONN   0        0                  0.0.0.0:sunrpc            0.0.0.0:*     
udp     UNCONN   0        0                127.0.0.1:829               0.0.0.0:*     
udp     UNCONN   0        0                  0.0.0.0:50131             0.0.0.0:*     
udp     UNCONN   0        0             172.29.99.49:5405              0.0.0.0:*     
udp     UNCONN   0        0                  0.0.0.0:47780             0.0.0.0:*     
udp     UNCONN   0        0                     [::]:sunrpc               [::]:*     
udp     UNCONN   0        0                     [::]:33056                [::]:*     
udp     UNCONN   0        0                     [::]:40498                [::]:*     
tcp     LISTEN   0        512           172.29.99.49:3300              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.49:6789              0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:8006              0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:50153             0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:sunrpc            0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.2:6800              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.49:6800              0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.2:6801              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.49:6801              0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.2:6802              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.49:6802              0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.2:6803              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.49:6803              0.0.0.0:*     
tcp     LISTEN   0        128              127.0.0.1:85                0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:ssh               0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:3128              0.0.0.0:*     
tcp     LISTEN   0        64                 0.0.0.0:41753             0.0.0.0:*     
tcp     LISTEN   0        100              127.0.0.1:smtp              0.0.0.0:*     
tcp     LISTEN   0        128                   [::]:sunrpc               [::]:*     
tcp     LISTEN   0        128                   [::]:ssh                  [::]:*     
tcp     LISTEN   0        64                    [::]:37689                [::]:*     
tcp     LISTEN   0        128                   [::]:37241                [::]:*     
tcp     LISTEN   0        100                  [::1]:smtp                 [::]:*

root@pve2:~# ss -tulw
Netid   State    Recv-Q   Send-Q       Local Address:port         Peer Address:port  
udp     UNCONN   0        0                  0.0.0.0:sunrpc            0.0.0.0:*     
udp     UNCONN   0        0                127.0.0.1:841               0.0.0.0:*     
udp     UNCONN   0        0                  0.0.0.0:51001             0.0.0.0:*     
udp     UNCONN   0        0             172.29.99.50:5405              0.0.0.0:*     
udp     UNCONN   0        0                  0.0.0.0:41197             0.0.0.0:*     
udp     UNCONN   0        0                     [::]:sunrpc               [::]:*     
udp     UNCONN   0        0                     [::]:33352                [::]:*     
udp     UNCONN   0        0                     [::]:46885                [::]:*     
tcp     LISTEN   0        128                0.0.0.0:35939             0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:3300              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:6789              0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:8006              0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:sunrpc            0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.3:6800              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:6800              0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.3:6801              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:6801              0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.3:6802              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:6802              0.0.0.0:*     
tcp     LISTEN   0        64                 0.0.0.0:39923             0.0.0.0:*     
tcp     LISTEN   0        512           172.22.128.3:6803              0.0.0.0:*     
tcp     LISTEN   0        512           172.29.99.50:6803              0.0.0.0:*     
tcp     LISTEN   0        128              127.0.0.1:85                0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:ssh               0.0.0.0:*     
tcp     LISTEN   0        128                0.0.0.0:3128              0.0.0.0:*     
tcp     LISTEN   0        100              127.0.0.1:smtp              0.0.0.0:*     
tcp     LISTEN   0        128                   [::]:sunrpc               [::]:*     
tcp     LISTEN   0        128                   [::]:38005                [::]:*     
tcp     LISTEN   0        128                   [::]:ssh                  [::]:*     
tcp     LISTEN   0        100                  [::1]:smtp                 [::]:*     
tcp     LISTEN   0        64                    [::]:37405                [::]:*     
root@pve2:~#

I think we have established that network connectivity between these three nodes is fine.
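With connectivity ruled out, the remaining question is whether the extra monitors ever actually join quorum. A couple of ways to check that, run on the node that is supposed to host the mon (this assumes the default admin socket is present; swap in the right mon ID):

Code:
# ask the local mon daemon what state it thinks it is in
ceph daemon mon.pve1 mon_status

# cluster-wide view of the quorum
ceph quorum_status --format json-pretty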
 
> The service not showing as started is a symptom of the problem.
Interestingly, the MON services seem to run, though only one MON is shown in the ceph -s output. That's a different issue than the GUI one. Is there anything in the ceph logs that might hint at why?
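The monitor logs normally end up in /var/log/ceph/ and in the journal; for example, on pve1 (standard locations for packaged Ceph, adjust the mon ID per node):

Code:
less /var/log/ceph/ceph-mon.pve1.log
journalctl -u ceph-mon@pve1 -b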

For the GUI/API, the service state is distributed by pvedaemon and queried by the GUI through pveproxy. Restarting those services and clearing the browser cache may help.
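Concretely, on the node whose GUI status looks wrong, something like this (pvedaemon and pveproxy are the standard PVE services mentioned above):

Code:
systemctl restart pvedaemon pveproxy
# then hard-reload the browser (clear the cache) and re-check the mon status in the GUI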

> I think we have established that network connectivity between these three nodes is fine.
Thanks. Since I don't know the setup, I wanted to make sure. :)
 
It must have been something in the workflow when I created that first cluster; I must have done something wrong.

I created everything again, and adding the two additional monitors just worked. I did have a clock sync problem, but got that worked out in short order.
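For anyone hitting the same clock issue: Ceph flags skew between monitors itself, and time sync can be checked per node, e.g. (chrony is the usual time daemon on recent PVE, but any NTP client works):

Code:
# Ceph's own view of clock skew between mons
ceph health detail | grep -i clock

# local time sync status
timedatectl
chronyc sources -v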

Thank you.
 
