pveceph install. I found the answer here in the forum, though: I had to enable the no-subscription repo. I didn't have to do that for the other servers in this cluster that I installed about a year ago.
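For anyone who lands here with the same error: on Proxmox 5 (Debian stretch) the repo entry would look something like the line below. The file name is just my choice; any file under /etc/apt/sources.list.d/ works:
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
Then apt update and re-run pveceph install.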
Hi,
I'm trying to add a couple of new nodes to an existing Proxmox cluster running the latest version of Proxmox 5. When I try to upgrade to Ceph Luminous I get the following error:
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W...
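If it helps anyone searching: this warning usually means apt is resolving Ceph from a repo whose packages conflict with proxmox-ve. On PVE 5 the Proxmox-built Luminous packages come from a dedicated repo; a sketch, assuming Debian stretch, of what /etc/apt/sources.list.d/ceph.list would contain:
deb http://download.proxmox.com/debian/ceph-luminous stretch main
followed by apt update and pveceph install --version luminous.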
pvecm status
Quorum information
------------------
Date: Wed Aug 14 20:29:31 2019
Quorum provider: corosync_votequorum
Nodes: 9
Node ID: 0x00000001
Ring ID: 1320
Quorate: Yes
Votequorum information
----------------------
Expected votes: 11...
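With Expected votes: 11 but only 9 nodes present, the quorum threshold works out to 11/2 + 1 = 6 votes, so the cluster is still quorate with two votes missing. If those two nodes are gone for good, something like the following (run on any quorate node; 9 is just this cluster's current node count) would bring the expectation back in line:
pvecm expected 9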
Hey Everyone,
We recently had a network failure in one of our data centers. The network failure caused all of the Proxmox nodes in our cluster to fence themselves. They're back up and running, and the cluster shows all nodes in, but we're having the following issues:
1. HA no longer works...
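To check what the HA stack is doing, the standard starting points are the stock PVE tools (nothing here is specific to our setup):
ha-manager status
systemctl status pve-ha-crm pve-ha-lrm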
Hi,
We're testing proxmox 5.2 running the latest enterprise version. We have a few LXC containers running on the hosts and managed through HA. When we reboot a host, all of the containers are started, but their network connections do not work. The only way to re-establish network connection to...
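When it happens, the first thing worth checking is whether the container's veth actually joined the bridge; vmbr0 and the VMID 101 below are placeholders for your own values:
brctl show vmbr0
pct exec 101 -- ip addr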
Hi Tom, is that a requirement? We run a lot of containers on each host, migrating and updating would take days, and then we'd have to go through a lot of testing with the new version.
We'll do it of course, if there are no other options.
Hi Guys,
We're currently running a Proxmox 4.2 cluster with Ceph Infernalis. The Ceph managers run on external hardware, not the Proxmox hosts.
We are considering upgrading to Ceph Luminous for a bunch of reasons. Everything looked good until I saw this in the Ceph docs.
WHICH CLIENT VERSIONS...
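For anyone comparing notes: the client side is easy to pin down, since each Proxmox node reports its installed Ceph client version with:
ceph --version
That output is what you'd check against the client compatibility section in the Ceph docs quoted above.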
Hi Felipe,
I updated to the latest version. Now I'm experiencing this issue: https://forum.proxmox.com/threads/pveproxy-become-blocked-state-and-cannot-be-killed.24386/page-2
I just saw this exact same issue running the latest version of Proxmox.
pveversion -v
proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.44-1-pve: 4.4.44-84
lvm2: 2.02.116-pve3
corosync-pve...
I've experienced the same issue intermittently with our Dell PowerEdge R620s.
I also see the time drift issue in our syslog. But it appears in the logs after the reboot and not before. Has anyone ever figured this one out?
Unfortunately no, but I see this in /var/log/messages:
Oct 22 18:08:08 affinitytarzana kernel: [15518.074006] device veth161i0 entered promiscuous mode
Oct 22 18:08:09 affinitytarzana kernel: [15519.083450] vmbr0: port 17(veth164i0) entered forwarding state
Oct 22 18:08:09 affinitytarzana...
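For what it's worth, those messages look like normal bridge behavior rather than an error: a veth goes promiscuous when it's added to vmbr0, and "entered forwarding state" just means the bridge finished its forwarding delay for that port. A typical PVE bridge stanza in /etc/network/interfaces already turns STP and the delay off; a sketch, with eth0 and the address as placeholders:
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0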
Thanks for responding. Network Manager isn't installed on any of the containers. It's got to be something on the Proxmox host. This doesn't happen on any of our other clusters, and the only difference is the version of Proxmox.
Hey Guys,
We just experienced something that shook our confidence in Proxmox. We just set up a 5-node Proxmox cluster. All of the nodes are running the exact same version of Proxmox:
root@Proxmox:/var/log# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1...