Proxmox SSL Setup Not Working

infinityM

Well-Known Member
Dec 7, 2019
Hey Guys,

I am very new to Proxmox, so please excuse the ignorance. I have been looking around but found no answer; hopefully there's one here ;).

I have set up a Proxmox cluster with an additional node, but if I try to do anything through the cluster it reports a communication error because there's no SSL.
So I have been trying to set up an SSL certificate on the servers, with no success.
I am using this article https://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x,_5.0_and_5.1) to install the certificate, but when I run
mkdir /etc/pve/.le I am met with mkdir: cannot create directory ‘/etc/pve/.le’: Permission denied and I can't proceed...

Has anyone run into this issue? How can I get past it and install the SSL certificate?
 
I have set up a Proxmox cluster with an additional node, but if I try to do anything through the cluster it reports a communication error because there's no SSL.

That and the error you get when doing the mkdir sound like your join failed.
What's the output of the following two commands?
Code:
pvecm status
systemctl status corosync pve-cluster

Normally you do not need to follow the wiki article you posted; the joining node's certificate should automatically be re-generated and signed by the CA of the cluster it joined.
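For background on the mkdir error: /etc/pve is not a normal directory but the pmxcfs cluster filesystem, and it becomes read-only (even for root) while the node has no quorum. Two harmless checks to confirm that's what you're hitting, assuming a standard install:
Code:
# /etc/pve is a FUSE mount provided by pmxcfs (the pve-cluster service)
mount | grep /etc/pve
# "Quorate: No" means writes to /etc/pve will be refused
pvecm status | grep -i quorate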
 
Hey t.lamprecht,

I haven't been able to issue the SSL certificate on either of the two servers...

Please see below for the output of the two commands.

root@pve:~# pvecm status
Quorum information
------------------
Date: Sun Dec 8 19:54:31 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/462688
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 129.232.156.114 (local)
root@pve:~# systemctl status corosync pve-cluster
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-12-06 19:06:02 SAST; 2 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 24877 (corosync)
Tasks: 9 (limit: 4915)
Memory: 735.8M
CGroup: /system.slice/corosync.service
└─24877 /usr/sbin/corosync -f

Dec 08 19:54:39 pve corosync[24877]: [QUORUM] Members[1]: 1
Dec 08 19:54:39 pve corosync[24877]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 08 19:54:40 pve corosync[24877]: [TOTEM ] A new membership (1:462716) was formed. Members
Dec 08 19:54:40 pve corosync[24877]: [CPG ] downlist left_list: 0 received
Dec 08 19:54:40 pve corosync[24877]: [QUORUM] Members[1]: 1
Dec 08 19:54:40 pve corosync[24877]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 08 19:54:42 pve corosync[24877]: [TOTEM ] A new membership (1:462720) was formed. Members
Dec 08 19:54:42 pve corosync[24877]: [CPG ] downlist left_list: 0 received
Dec 08 19:54:42 pve corosync[24877]: [QUORUM] Members[1]: 1
Dec 08 19:54:42 pve corosync[24877]: [MAIN ] Completed service synchronization, ready to provide service.

● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-12-06 19:06:01 SAST; 2 days ago
Process: 24867 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Process: 24875 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Main PID: 24870 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 36.6M
CGroup: /system.slice/pve-cluster.service
└─24870 /usr/bin/pmxcfs

Dec 08 13:06:00 pve pmxcfs[24870]: [dcdb] notice: data verification successful
Dec 08 14:06:01 pve pmxcfs[24870]: [dcdb] notice: data verification successful
 
I am not sure if it has anything to do with it, but I am also getting the following message on the node:
TASK ERROR: command 'apt-get update' failed: exit code 100
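Exit code 100 from apt-get is a generic repository error and is probably unrelated to the cluster problem. To see the actual cause (the enterprise-repo check below is just a guess, since it is the usual culprit on installs without a subscription):
Code:
# run the update by hand to see the real error message
apt-get update
# common cause: the enterprise repository is enabled without a valid subscription
cat /etc/apt/sources.list.d/pve-enterprise.list
# if so, comment that line out (and add the no-subscription repository instead)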
 
That and the error you get when doing the mkdir sound like your join failed.
What's the output of the following two commands?
Code:
pvecm status
systemctl status corosync pve-cluster

Normally you do not need to follow the wiki article you posted; the joining node's certificate should automatically be re-generated and signed by the CA of the cluster it joined.
Bud, do you have any advice for me? I'm still stuck here :(
 
Your nodes did not join correctly; you're not quorate. Maybe they just do not have network access to each other. You first need to fix that, then a pvecm updatecerts should fix the certificate issue.

Are the nodes on the same LAN?

Post
Code:
cat /etc/pve/corosync.conf
ip addr

from both nodes; then I can see whether I can point you to a fixed corosync config which you can deploy to get out of this mess.
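For reference, a rough sketch of how such a fix is usually rolled out on the node you want to keep, once you know what needs changing; treat it as a generic outline rather than exact instructions for this cluster:
Code:
# temporarily let a single vote count as quorum so /etc/pve becomes writable again
pvecm expected 1
# edit a copy, bump config_version by 1, then move it back into place
cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new
cp /root/corosync.conf.new /etc/pve/corosync.conf
systemctl restart corosync
# once both nodes see each other again, refresh the node certificates
pvecm updatecerts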
 
Your nodes did not join correctly; you're not quorate. Maybe they just do not have network access to each other. You first need to fix that, then a pvecm updatecerts should fix the certificate issue.

Are the nodes on the same LAN?

Post
Code:
cat /etc/pve/corosync.conf
ip addr

from both nodes; then I can see whether I can point you to a fixed corosync config which you can deploy to get out of this mess.
Thank you for the quick response :).

I checked, and I was able to ping each server from the other. I ran the two checks for you anyway...


Main Server (114):

cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: c1
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 129.232.156.115
  }
  node {
    name: pve
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 129.232.156.114
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: MainCluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

root@pve:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether bc:30:5b:ce:24:6a brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:ce:24:6c brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:ce:24:6e brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:ce:24:70 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether bc:30:5b:ce:24:6a brd ff:ff:ff:ff:ff:ff
inet 129.232.156.114/28 brd 129.232.156.127 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::be30:5bff:fece:246a/64 scope link
valid_lft forever preferred_lft forever
root@pve:~# ping 129.232.156.115
PING 129.232.156.115 (129.232.156.115) 56(84) bytes of data.
64 bytes from 129.232.156.115: icmp_seq=1 ttl=64 time=0.235 ms
64 bytes from 129.232.156.115: icmp_seq=2 ttl=64 time=0.170 ms
^C
--- 129.232.156.115 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 14ms
rtt min/avg/max/mdev = 0.170/0.202/0.235/0.035 ms


Node1 (115):

cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: c1
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 129.232.156.115
  }
  node {
    name: pve
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 129.232.156.114
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: MainCluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

root@c1:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether d4:85:64:5f:bd:bc brd ff:ff:ff:ff:ff:ff
3: enp2s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:85:64:5f:bd:be brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d4:85:64:5f:bd:bc brd ff:ff:ff:ff:ff:ff
inet 129.232.156.115/28 brd 129.232.156.127 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::d685:64ff:fe5f:bdbc/64 scope link
valid_lft forever preferred_lft forever
root@c1:~# ping 129.232.156.114
PING 129.232.156.114 (129.232.156.114) 56(84) bytes of data.
64 bytes from 129.232.156.114: icmp_seq=1 ttl=64 time=0.208 ms
64 bytes from 129.232.156.114: icmp_seq=2 ttl=64 time=0.214 ms
^C
--- 129.232.156.114 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 0.208/0.211/0.214/0.003 ms
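One thing these outputs do not show: ping only proves ICMP gets through, while corosync (kronosnet) talks over UDP, which a provider firewall can filter separately on public addresses like these. Two harmless checks, assuming the default setup:
Code:
# which UDP port corosync/kronosnet is actually using
ss -ulnp | grep corosync
# link up/down messages hint at whether the nodes ever reach each other
journalctl -u corosync --since "1 hour ago" | grep -iE 'link|knet'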
 
Your nodes did not join correctly; you're not quorate. Maybe they just do not have network access to each other. You first need to fix that, then a pvecm updatecerts should fix the certificate issue.

Are the nodes on the same LAN?

Post
Code:
cat /etc/pve/corosync.conf
ip addr

from both nodes; then I can see whether I can point you to a fixed corosync config which you can deploy to get out of this mess.
Bud, do you have any news for me? I'm still kinda stuck D:
 
