Search results

  1. Problem after network failure

    pvecm status
    Quorum information
    ------------------
    Date:             Wed Aug 14 20:29:31 2019
    Quorum provider:  corosync_votequorum
    Nodes:            9
    Node ID:          0x00000001
    Ring ID:          1320
    Quorate:          Yes
    Votequorum information
    ----------------------
    Expected votes: 11...
  2. Problem after network failure

    Here's some additional info.
    proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
    pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.2.6-1-pve: 4.2.6-36
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 1.0-1
    pve-cluster: 4.0-39
    qemu-server: 4.0-72...
  3. Problem after network failure

    Hey Everyone, We recently had a network failure in one of our data centers. The network failure caused all of the Proxmox nodes in our cluster to fence themselves. They're back up and running, and the cluster shows all nodes in, but we're having the following issues: 1. HA no longer works...
  4. LXC network doesn't work after host reboot

    I've seen other similar posts, but none that posted a solution. Is this just a limitation of Proxmox 5?
  5. LXC network doesn't work after host reboot

    Hi, We're testing Proxmox 5.2 running the latest enterprise version. We have a few LXC containers running on the hosts and managed through HA. When we reboot a host, all of the containers are started, but their network connections do not work. The only way to re-establish network connection to...
  6. Proxmox with Ceph Luminous

    Thanks again, Tom. I'll really push that we upgrade. But just so I know, have you or anyone tested Proxmox v4 with Ceph version 12?
  7. Proxmox with Ceph Luminous

    Hi Tom, is that a requirement? We run a lot of containers on each host, migrating and updating would take days, and then we'd have to go through a lot of testing with the new version. We'll do it of course, if there are no other options.
  8. Proxmox with Ceph Luminous

    Hi Guys, We're currently running a Proxmox 4.2 cluster with Ceph Infernalis. The Ceph managers run on external hardware, not the Proxmox hosts. We are considering upgrading to Ceph Luminous for a bunch of reasons. Everything looked good until I saw this in the Ceph docs: WHICH CLIENT VERSIONS...
  9. Server Proxmox 4.1 resets every day

    Hi Felipe, I updated to the latest version. Now I'm experiencing this issue: https://forum.proxmox.com/threads/pveproxy-become-blocked-state-and-cannot-be-killed.24386/page-2
  10. pveproxy become blocked state and cannot be killed

    I just saw this exact same issue running the latest version of Proxmox.
    pveversion -v
    proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
    pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.44-1-pve: 4.4.44-84
    lvm2: 2.02.116-pve3
    corosync-pve...
  11. Server Proxmox 4.1 resets every day

    I've experienced the same issue intermittently with our Dell PowerEdge R620s. I also see the time-drift issue in our syslog, but it appears in the logs after the reboot, not before. Has anyone ever figured this one out?
  12. Network down on all LXC Containers

    Unfortunately we couldn't wait for a fix. So we wiped out all of the servers and installed an older, working version.
  13. Network down on all LXC Containers

    Anyone got any ideas? At this point all we can think of is trying to roll back to an older version of Proxmox.
  14. Network down on all LXC Containers

    That failed right away. Here's the output:
    lxc-start 20161025074154.576 DEBUG lxc_conf - conf.c:setup_caps:2057 - drop capability 'sys_time' (25)
    lxc-start 20161025074154.576 DEBUG lxc_conf - conf.c:setup_caps:2057 - drop capability 'sys_module' (16)
    lxc-start...
  15. Network down on all LXC Containers

    Doing that now. I'll post the output as soon as it happens again.
  16. Network down on all LXC Containers

    Unfortunately no, but I see this in /var/log/messages:
    Oct 22 18:08:08 affinitytarzana kernel: [15518.074006] device veth161i0 entered promiscuous mode
    Oct 22 18:08:09 affinitytarzana kernel: [15519.083450] vmbr0: port 17(veth164i0) entered forwarding state
    Oct 22 18:08:09 affinitytarzana...
  17. Network down on all LXC Containers

    Thanks for responding. Network Manager isn't installed on any of the containers. It's got to be something on the proxmox host. This doesn't happen on any of our other clusters, and the only difference is the version of Proxmox.
  18. Network down on all LXC Containers

    Hey Guys, We just experienced something that shook our confidence in Proxmox. We just set up a 5-node Proxmox cluster. All of the nodes are running the exact same version of Proxmox:
    root@Proxmox:/var/log# pveversion -v
    proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
    pve-manager: 4.3-1...
  19. Proxmox 4.3 Communication Problems

    Hi, We just created a new cluster using Proxmox VE 4.3. One node out of 11 randomly shows as offline in the GUI. Here's the Proxmox version information:
    root@ProxmoxLV4:~# pveversion -v
    proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
    pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)...
  20. Change Display Name in Web Interface

    Thanks for replying, Wolfgang. Would you mind sharing those steps with me? We don't have any VMs yet on these hosts. Or, if it's easier, I can remove these servers from the cluster, change their hostnames, and re-add them.
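
Several of the threads above start from checking cluster quorum with pvecm status after a network failure. As a minimal sketch (working against the sample output quoted in result 1, not a live cluster), the Quorate flag can be pulled out of that output like so:

```shell
#!/bin/sh
# Minimal sketch: extract the "Quorate" flag from pvecm status output.
# The heredoc below is sample output copied from result 1 above; on a real
# node you would pipe `pvecm status` itself instead of this saved text.
status=$(cat <<'EOF'
Quorum information
------------------
Date:             Wed Aug 14 20:29:31 2019
Quorum provider:  corosync_votequorum
Nodes:            9
Node ID:          0x00000001
Ring ID:          1320
Quorate:          Yes
EOF
)

# Match the "Quorate:" line and print its value (Yes/No).
quorate=$(printf '%s\n' "$status" | awk '/^Quorate:/ {print $2}')
echo "Quorate: $quorate"
```

On a quorate node this prints Quorate: Yes; a "No" here is consistent with the HA failures described in the first thread, since HA operations require quorum.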
