We've got two external Ceph clusters, both working fine with PVE 5.2 for RBDs.
From the 1st cluster we've mounted a CephFS that we use to store vzdumps,
but we would also like to mount a CephFS from the 2nd cluster.
The 1st cluster is found in /etc/ceph/ceph.conf + ceph.client.admin.keyring;
the 2nd cluster is...
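In case it helps to see what I'm aiming for, here is a rough sketch of how the 2nd cluster could be laid out on the node (file names and monitor addresses below are just placeholders, not our real ones):

    /etc/ceph/ceph2.conf                       # copy of the 2nd cluster's ceph.conf
    /etc/ceph/ceph2.client.admin.keyring       # admin keyring of the 2nd cluster
    # extract the bare secret for the kernel client
    ceph-authtool -p /etc/ceph/ceph2.client.admin.keyring > /etc/ceph/ceph2.secret
    # mount CephFS from the 2nd cluster's MONs
    mount -t ceph 10.0.1.1:6789,10.0.1.2:6789:/ /mnt/cephfs2 -o name=admin,secretfile=/etc/ceph/ceph2.secret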
Got a testlab with two cluster networks, a private 3x1Gbps bonded and a public 100Mbps:
I just want to make sure live migration always happens across the ring0 network, so I've assigned the unqualified host names in /etc/hosts to the ring0 network:
Only, live migration doesn't pick the ring0 (private) network now, but...
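To make concrete what I mean, a minimal sketch (hostnames and addresses are just examples), plus the datacenter.cfg migration option that should pin migrations to a given subnet:

    # /etc/hosts - unqualified names resolve to the ring0 (private) addresses
    10.0.0.11      node1
    10.0.0.12      node2
    192.168.1.11   node1-pub
    192.168.1.12   node2-pub

    # /etc/pve/datacenter.cfg - force migration traffic onto the private subnet
    migration: secure,network=10.0.0.0/24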
I'm trying to mount a CephFS of a Mimic cluster with a Luminous client on a PVE 5.2 node, but am seeing this:
The same mount works just fine on the Mimic cluster's CentOS 7.5 nodes:
Want to upgrade an old 3.4 testlab connected to a Hammer Ceph cluster (I know :)
The plan is first to migrate the VM images to a newly installed Ceph Mimic cluster; would it be possible to connect to both Ceph clusters (e.g. by upgrading the Ceph client to Jewel or later)?
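What I had in mind is something like this in /etc/pve/storage.cfg, one external RBD entry per cluster (pool names and monitor addresses are only examples):

    rbd: ceph-hammer
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        content images
        username admin

    rbd: ceph-mimic
        monhost 10.0.1.1 10.0.1.2 10.0.1.3
        pool rbd
        content images
        username admin

with the matching keyrings copied to /etc/pve/priv/ceph/ceph-hammer.keyring and /etc/pve/priv/ceph/ceph-mimic.keyring.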
Thinking it's time to consider doing an upgrade from Jessie 4.4 to the latest 5.1 by following this 'in place upgrade' procedure, and wondering if it could be an issue that we're using two corosync rings, HA clustering and shared storage from an iSCSI array only?
If anything were to go wrong, could we...
Just attempted to patch an older PVE 3.4 testlab to the latest patch levels.
Found a newer kernel, pve-kernel-2.6.32-48-pve; only, when booting on this, our Open vSwitch looked fine but couldn't get traffic in/out through the bonded NIC plugged into the single OVS vmbr1, and thus we had no access to the Ceph cluster...
If we boot a VM/guest on kernel 4.14.12 with KPTI enabled, it no longer shows netfilter stats as on earlier kernels (4.13.4 and below), e.g. always returning a zero value from:
Can't really find a good reason on the 'Net.
Anyone know why?
The last two live migrations of a VM running relatively heavy network traffic seemed to crash the VM on the target host at resume, in the virtio-net driver. See the attached screen dump from the target VM console.
Got an older 7-node 3.4 testlab (running Ceph Hammer 0.94.9 on 4 of the nodes and only VMs on the other 3 nodes), which we wanted to patch up today, but after rebooting our OSDs won't start; it seems they can't connect to the Ceph cluster. Wondering why that might be?
Previous version before patching...
Under Memory at https://HOST:8006/pve-docs/chapter-qm.html#qm_memory it's written:
'When allocating RAMs to your VMs, a good rule of thumb is always to leave 1GB of RAM available to the host'
But I tend to find that when getting to around 60% memory usage on a hypervisor host, it starts to send...
We have all VM networks virtualized by VLAN tagging and connected through a single OVS switch, vmbr1. This switch is connected to a pair of bonded 2x10Gbps NICs cabled to a virtual chassis comprised of two Cisco Nexus 5672UP. Sometimes during a reboot of a PVE 4.4 hypervisor node (probably during...
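For reference, the kind of OVS setup I'm describing looks roughly like this in /etc/network/interfaces (interface names are illustrative, not our exact config):

    allow-vmbr1 bond1
    iface bond1 inet manual
        ovs_bonds eth2 eth3
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr1
    allow-ovs vmbr1
    iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond1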
Running a 7-node 4.4 cluster with VM storage in LVs from volume groups with PVs from a shared iSCSI SAN.
It seems either our iSCSI devices or the number of VM LVs has caused slow OS probing during grub updating, creating the risk that the SW watchdog sometimes fires an NMI during grub configuration, as it...
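One thing I'm considering, assuming the slowness really comes from os-prober scanning all those LVs, is simply disabling OS probing before the next kernel update:

    # /etc/default/grub
    GRUB_DISABLE_OS_PROBER=true
    # then regenerate the grub config
    update-grub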
Got a 4.4 production cluster attached to a multipathed iSCSI SAN from an HP MSA 1040. We divided the MSA into two disk groups, A & B, then created 5+1 iSCSI LUNs per MSA disk group and mapped those to PVs in four volume groups on each hypervisor node, like this:
vgXbck LUNs are mapped to the nfs server...
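Just to illustrate the scheme (device and VG names below are simplified examples, not our exact ones):

    # one multipath device per LUN -> PV -> VG, e.g. for MSA disk group A
    pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
    vgcreate vgA /dev/mapper/mpatha /dev/mapper/mpathb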
Whenever we need to upgrade the pve-kernel in our PVE 4.4 HA cluster, we find grub updating to be very slow (it seems to be looking for other boot images on all known devices). In fact it is so slow that the HA SW watchdog sometimes fires an NMI; depending on at what stage this happens, it sometimes...
Wanted to roll out last week's changes to PVE 4.3:
so we migrated all VMs off the first node and ran the patches through apt-get upgrade.
The SW watchdog then fired an NMI during patching of the pve-cluster package and the node rebooted; it came up fine and we finished it with dpkg --configure -a and another apt-get...
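What we plan to do on the remaining nodes, assuming it's safe to stop the HA services once all VMs are migrated off, so the watchdog can't fence the node mid-upgrade:

    # on the node about to be patched, after migrating its VMs away
    systemctl stop pve-ha-lrm pve-ha-crm
    apt-get update && apt-get dist-upgrade
    systemctl start pve-ha-crm pve-ha-lrm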
Running our PVE hypervisor nodes attached to two Cisco Nexus 5672 leaf switches, configured to support MTU 9000. So our hypervisor nodes all allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs.
Two CentOS 6 VMs are used as a HAproxy load balancing...
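Inside the CentOS 6 guests the vNICs are raised to MTU 9000 as well, along these lines (illustrative only):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 in a CentOS 6 guest
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    MTU=9000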