I plan on building a 5-node Ceph setup with Proxmox: Corosync in a mesh over a 1Gb quad NIC, Ceph public/private in a mesh over a 25Gb quad NIC, and a VM access network (also used as a 2nd ring for Proxmox Corosync) over a 25Gb NIC connected to redundant 25Gb switches. We don't see ourselves expanding past 5 nodes...
I have a Proxmox test server with two NICs. One NIC, for management, is tied to a local network with no internet access. We have a separate network with internet access, and I tied it to the second bridge. Is it possible to update Proxmox through the GUI/CLI via the interface tied to the network with...
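Yes, this is doable; a sketch of two common approaches, assuming vmbr1 is the internet-facing bridge (the gateway and proxy addresses below are placeholders, not from the post):

```shell
# Option 1: send the host's default route out the internet-capable
# bridge. Management traffic stays on its local subnet, which is
# on-link, so it is unaffected. 10.0.0.1 is a placeholder gateway.
ip route add default via 10.0.0.1 dev vmbr1

# Option 2: leave routing alone and point apt at an HTTP proxy
# reachable on that network (placeholder address/port):
echo 'Acquire::http::Proxy "http://10.0.0.2:3128";' \
  > /etc/apt/apt.conf.d/76proxy

# Either way, updates then work from the CLI and the GUI updater:
apt-get update && apt-get dist-upgrade
```

These are host-specific command fragments; adapt the bridge name and addresses to your networks before running them.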
I've searched the forum for this topic and didn't see any hits. I want to deploy Proxmox, hopefully a cluster with Ceph; however, our dilemma is that we do not allow connections from outside of the US. The servers are isolated and get updates from local repos. Is there a US-based Proxmox...
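Since the servers already pull from local repos, one option is mirroring the Proxmox repository onto a host inside the allowed network. A minimal sketch with apt-mirror (the internal hostname is a placeholder, and the suite name assumes PVE 5 on Stretch):

```shell
# On a machine that is allowed outbound access, mirror the repo:
cat >> /etc/apt/mirror.list <<'EOF'
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
EOF
apt-mirror

# On each isolated node, point apt at the internal mirror instead:
echo 'deb http://mirror.internal.example/debian/pve stretch pve-no-subscription' \
  > /etc/apt/sources.list.d/pve-local.list
```

This is a config fragment, not a turnkey setup; the mirrored tree still needs to be served over HTTP to the nodes.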
I have a Proxmox host with public IPs assigned to the host and a few containers inside. I created a bridge to make a private network, which works fine between the containers. I set up pfSense in a VM with a public and a private IP (126.96.36.199). Then I have an OpenVPN pfSense site-to-site from my home to the...
I have Proxmox set up in a DC with a public IP. A few LXC containers are running; each one has a private IP as well, and I can ping locally between the containers. I set up another KVM with pfSense with private/public IPs and have a site-to-site IPsec to another network.
My local Proxmox network...
I have Proxmox in a DC and did some speedtest-cli runs, all with public IPs assigned, all up to date:
LXC Debian 9 template
LXC Debian 8 template
Also a Windows 10 KVM with VirtIO, just for comparison.
None of the interfaces have a rate limit set...
I installed Proxmox in a DC and got 5 usable IPs. This is fine, but I wonder if there's a way to share the host's WAN IP (1 of the 5) between 2 or more LXC containers.
I created a bridge, vmbr0, on the WAN port eno1.
I also created another bridge, vmbr1, not connected to any port, for an internal network...
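Sharing the host's WAN IP is typically done with NAT on the host; a sketch, assuming the containers sit on vmbr1 with a 10.10.10.0/24 subnet (the subnet and addresses are placeholders):

```shell
# Give the host an address on the internal bridge and enable forwarding:
ip addr add 10.10.10.1/24 dev vmbr1
sysctl -w net.ipv4.ip_forward=1

# Masquerade container traffic behind the host's WAN IP on vmbr0:
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE

# Optionally expose a container service by forwarding a WAN port:
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 \
  -j DNAT --to-destination 10.10.10.50:80
```

The containers would then use 10.10.10.1 as their gateway. These are command fragments for a live host, so adjust subnets and ports first.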
I just installed Proxmox 5 and upgraded to the latest version. I downloaded the Debian 9.0 standard template within Proxmox, then created an LXC container from it and gave it an IP of 192.168.1.57/32.
The template has sshd configured by default. When I try to SSH to it, it's not even reaching...
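The /32 is the likely culprit: it leaves the container with no on-link route, so neither inbound SSH nor outbound traffic can find the LAN. A sketch of the usual fix, assuming a 192.168.1.0/24 network with gateway 192.168.1.1 and container ID 100 (all assumptions):

```shell
# Set the real subnet mask and a gateway on the container's NIC:
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.57/24,gw=192.168.1.1
# Restart the container so the new network config applies:
pct stop 100 && pct start 100
```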
I have one VM that I'm trying to add 20G to.
--- Logical volume ---
LV Path /dev/NYPH/vm-105-disk-1
LV Name vm-105-disk-1
VG Name NYPH
LV UUID odLtwF-C2SH-UREn-oPs6-ZKAe-rwQw-N9Qskg
LV Write Access read/write
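Since the disk is an LV on the NYPH VG, `qm resize` grows both the LV and the disk the guest sees; a sketch, assuming the disk is attached as virtio0 (check the VM's config for the actual bus name):

```shell
# Grow VM 105's disk by 20G on the underlying LVM storage:
qm resize 105 virtio0 +20G
```

The partition and filesystem inside the guest still need to be grown afterwards (e.g. with growpart and resize2fs, depending on the guest's layout).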
I have 4 nodes in a cluster and I see that my SAN (iSCSI with LVM) storage has extra disks that look like leftovers from deleted VMs.
I have two config files from old VMs:
May 10 2016 113.conf.1939.tmp
Sep 15 2016 200.conf.14637.tmp
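The `*.conf.NNNN.tmp` files are leftovers of interrupted config writes and are safe to delete once no VM uses those IDs. To spot orphaned disks on the shared VG, the LV names can be cross-checked against configured VMIDs; a sketch (the VG name `sanvg` is a placeholder):

```shell
for lv in $(lvs --noheadings -o lv_name sanvg); do
  # LVs created by Proxmox are named vm-<VMID>-disk-N; extract the VMID:
  vmid=$(echo "$lv" | sed -n 's/^vm-\([0-9]\+\)-disk.*/\1/p')
  [ -n "$vmid" ] || continue
  # Flag the LV if no node in the cluster has a config for that VMID:
  ls /etc/pve/nodes/*/qemu-server/"$vmid".conf >/dev/null 2>&1 \
    || echo "possible orphan: $lv (no config for VMID $vmid)"
done
```

Treat the output as a candidate list only; confirm each LV manually before running `lvremove` on shared storage.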
Over the weekend I upgraded a 4-node cluster to 4.4-13/7ea56165, and I just noticed that all my VMs show 512MB of memory under Hardware, the iSCSI raw disks are not listed (FreeNAS), and there are no stats. Is anyone else experiencing this problem? I rebooted every node after the upgrade as well. All the...
pve-manager/4.4-1/eb2d6f1e (running kernel: 4.4.35-1-pve)
with an iSCSI backend for VM storage and NFS for backups. I believe a backup failed and caused all nodes to go dark, though they are still pingable and accessible.
root@px1:~# pvecm status
Date: Tue Jan...
I'm trying to install the latest Debian (netinstall CD), Jessie, in a new KVM. The install completes fine, but when it says to reboot at the end of the installation process, the KVM starts booting from the hard drive (I tried both IDE and VirtIO disks) and it's stuck on recovering journal, clearing...
Three of my 4 cluster nodes show up with red marks, but all the VMs are operational.
In /var/log/messages I see:
Sep 22 16:17:31 px3 kernel: [8359563.040449] pvestatd D ffff88010970bdf8 0 18963 1 0x00000004
Sep 22 16:17:31 px3 kernel: [8359563.040452] ffff88010970bdf8...
Hi, I have a 4-node cluster, px1-4. Due to a network power loss the cluster was split.
On px1, the faulty one, I see:
@px1:/etc/pve# pvecm status
Date: Fri Jun 24 12:12:25 2016
Quorum provider: corosync_votequorum
I have a 4-node cluster hooked up to a SAN.
Today I got a notification email from node1 (the master node):
Job for pveproxy.service failed. See 'systemctl status pveproxy.service' and 'journalctl -xn' for details.
Job for spiceproxy.service failed. See 'systemctl status...
I mistakenly did a hard reset on node #2 in a 4-node cluster, and now the node is gone from the cluster. When I try to re-add it, it says:
authentication key already exists
Is there a way to re-add it? Are the VMs going to start up?
I'm on 4.2-2/725d76f0.
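On 4.x the usual recovery (a sketch only; verify the node names against your cluster before running anything) is to drop the stale membership entry and then rejoin with the old authentication key forcibly replaced:

```shell
# On a healthy node, remove the reset node from the cluster config:
pvecm delnode px2

# On the reset node (ideally after a clean reinstall), rejoin by
# pointing at any remaining cluster member and forcing the key swap:
pvecm add <IP-of-a-cluster-member> -force
```

VMs on that node that are marked to start at boot come up again once the node regains quorum.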
I have a 3-node Proxmox cluster that I created, connected to iSCSI storage (with LVM over it). I used ZFS RAID1 during the Proxmox install on each node. The Proxmox ZFS storage wiki explains how to limit ZFS memory usage, but I don't see zfs.conf in /etc/modprobe.d/.
Is this only for ZFS storage that's...
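The file isn't shipped by default; you create it yourself, and the limit applies to any ZFS on the host, including a root pool from the installer's RAID1. A sketch capping the ARC at 4 GiB (the value is only an example; size it for your RAM):

```shell
cat > /etc/modprobe.d/zfs.conf <<'EOF'
# 4 GiB = 4 * 1024^3 bytes
options zfs zfs_arc_max=4294967296
EOF

# With root on ZFS the option must be baked into the initramfs:
update-initramfs -u
# then reboot for it to take effect
```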
I have two nodes in a cluster and will add a third one soon.
I was on a flat network but switched to VLAN (66) over the weekend to incorporate a DMZ and another secure network.
So now my flat network is on a VLAN and everything is on it, including the Proxmox hosts (PH). Still on 3.4; will be...
I have a 2-node cluster with iSCSI storage. Now that ZFS is supported, can I create a zpool on each node (from fast SSDs) and migrate the VMs to it? As far as monitoring zpool health, is it implemented in the GUI or does it need to be scripted? Same goes for drive replacement in case of failure...
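If the GUI doesn't surface pool health, a small cron job is the usual answer; a sketch (the mail address is a placeholder):

```shell
#!/bin/sh
# zpool status -x prints "all pools are healthy" when nothing is
# wrong, and prints details only for troubled pools.
status=$(zpool status -x)
if [ "$status" != "all pools are healthy" ]; then
  echo "$status" | mail -s "zpool problem on $(hostname)" root@example.com
fi
```

Drop it in /etc/cron.hourly (or a crontab entry) on each node; it stays silent while the pools are healthy.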