Hi,
I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my PVE 6.4.x host. I have created this config in /etc/network/interfaces:
auto bond1
iface bond1 inet manual
bond-slaves enp5s0f0 enp5s0f1
bond-mode active-backup...
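For reference, ARP-based link monitoring in an active-backup bond is usually expressed with the `bond-arp-interval` / `bond-arp-ip-target` ifupdown options, with miimon disabled. A minimal sketch (the interface names, interval, and target IP below are assumptions, not taken from the post; the target should be an always-reachable address such as the gateway):

```
# /etc/network/interfaces sketch: ARP monitoring instead of miimon
auto bond1
iface bond1 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode active-backup
    bond-miimon 0              # disable MII monitoring
    bond-arp-interval 200      # probe every 200 ms (assumed value)
    bond-arp-ip-target 192.168.1.1   # assumed gateway IP
```

After editing, the current monitoring mode can be checked in /proc/net/bonding/bond1.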
Hi,
I am running a PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full, and I wonder if that is the reason GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster...
Hi,
I started to notice that PVE nodes get fenced when the swap space provided by zramswap becomes full. This only happens while a node is performing backups of its guests. In such a case, messages like these show up in the system log.
A couple of RIP: 0010:_raw_spin_lock...
Hi,
I have two separate PVE clusters: one hosts my Ceph storage, while the other hosts only the guests. The PVE nodes each have two 1GbE and two 10GbE interfaces, with the 10GbE ones configured as an LACP bond. I had all the communication running over different VLANs on that bond, and this led to...
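A commonly recommended layout in this situation is to keep latency-sensitive corosync traffic off the shared LACP bond entirely and give it one of the 1GbE ports. A sketch of what that could look like (all interface names, VLAN IDs, and addresses are assumptions for illustration):

```
# /etc/network/interfaces sketch: storage VLAN on the bond,
# corosync on a dedicated 1GbE NIC
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto bond0.20
iface bond0.20 inet static
    address 10.0.20.11/24
    # Ceph / guest traffic VLAN on the 10GbE bond

auto enp1s0
iface enp1s0 inet static
    address 10.0.30.11/24
    # dedicated corosync ring, independent of the bond
```

The point of the dedicated link is that a saturated bond can delay corosync heartbeats enough to trigger fencing even though the links are physically up.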
Hi,
I am about to manually change the corosync config on my PVE cluster to introduce a second ring interface. I have read up on how to do that, and although I am fairly sure I got the config right, I was wondering if I could somehow prevent my nodes from being fenced, should I have messed up the new...
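One widely used precaution is to pause the HA stack on every node before touching corosync.conf, since the watchdog only arms while the HA services are active; a corosync hiccup then cannot trigger self-fencing. A sketch of the sequence (run on each node; the assumption here is that a brief HA outage is acceptable):

```
# Stop the HA stack so a corosync misconfiguration cannot fence the node
systemctl stop pve-ha-lrm      # first, on all nodes
systemctl stop pve-ha-crm      # then, on all nodes

# ... edit corosync.conf (remember to bump config_version) ...
systemctl restart corosync

# Once quorum is confirmed healthy again, re-enable HA
systemctl start pve-ha-crm
systemctl start pve-ha-lrm
```

Checking `pvecm status` between the restart and re-enabling HA is a cheap way to confirm the new ring actually came up.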
A couple of days ago, we experienced an issue with a switch which carried the corosync traffic for two of the six PVE hosts in our cluster. I can understand that PVE fenced those two hosts, but why did the other four reboot as well? How can I find out what caused all my nodes to reboot...
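Since four of six nodes should still have held quorum, the logs from the boot before the reboot are the place to look. A sketch of where to start (run on each affected node; `-b -1` selects the previous boot):

```
# Corosync membership changes and cluster filesystem state before the reboot
journalctl -b -1 -u corosync -u pve-cluster

# HA manager / watchdog activity (self-fencing shows up here)
journalctl -b -1 -u pve-ha-lrm -u pve-ha-crm -u watchdog-mux
```

If the logs show all nodes losing each other simultaneously, the switch issue may have disturbed corosync cluster-wide rather than only the two directly attached hosts.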
Can it be that an error has been introduced in the latest version of PVE? I have configured a zpool, and on that zpool there is a folder which I have set up to be used for backups. However, every time I try to run a backup and select that folder, I see this:
As you can see, the amount of...
I am quite curious… I experienced some (presumably) hardware issues on one of the nodes in my 3-node Ceph cluster. Every time I rebooted the host, which by then had been hung for some length of time, the Ceph mon on that host would be dysfunctional and I needed to set up the monitor again. Is this...
I have migrated some guests from my Oracle VM (Xen) setup to Proxmox. Among them are two Ubuntu 18.04 guests, which started up under Xen at, well… the usual speed. However, starting these guests under KVM/Proxmox causes them to take a long time (up to 150 s) to begin the actual boot process, such as...
I am currently trying my hand at my first LXC containers and wanted to rebuild a small CentOS 8 KVM guest as an LXC container. So I downloaded the CentOS 8 LXC template and created an LXC container from it. That all works, but after I ran a yum... inside the CentOS 8 container
I am still exploring the two most interesting storage solutions for my home/lab setup, which are Ceph and ZFS. I know I can make Ceph work fine with just 3 or 4 OSDs running on the same node if the CRUSH map gets tweaked accordingly. However, I am already a ZFS old-fart and I also looked into...
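For the single-node Ceph case mentioned above, the usual CRUSH tweak is a replicated rule whose failure domain is `osd` instead of `host`, so replicas may land on different OSDs of the same node. A sketch (the rule and pool names are assumptions):

```
# Create a replicated rule with "osd" as the failure domain
ceph osd crush rule create-replicated replicated-osd default osd

# Point an existing pool at it (pool name "rbd" assumed)
ceph osd pool set rbd crush_rule replicated-osd
```

With the default `host` failure domain, a single node can never satisfy size=3, which is why PGs stay degraded until a rule like this is applied.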
Hi,
I am just starting out with Proxmox, coming from Oracle VM, which has been EOL'ed. I installed a three-node PVE 6.1 cluster with Ceph enabled. However, I want to keep the Ceph storage nodes free of VMs, and I was wondering if I can just install PVE on another host and have it connect to...
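Connecting a separate PVE host to an existing Ceph cluster is typically done by adding an external RBD storage entry. A sketch of what that could look like in /etc/pve/storage.cfg (the storage ID, monitor IPs, and pool name below are placeholders, not taken from the post):

```
# /etc/pve/storage.cfg sketch: external RBD storage
rbd: external-ceph
    monhost 192.168.1.21 192.168.1.22 192.168.1.23
    pool vm-disks
    content images
    username admin
```

The client host also needs the matching keyring, placed at /etc/pve/priv/ceph/<storage-id>.keyring so PVE can authenticate against the external cluster.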