Search results

  1. stefws

    howto mount a second cephfs of another ceph cluster

    We've got two external Ceph clusters, both working fine with PVE 5.2 for RBDs. From the first cluster we've mounted a CephFS and use it to store vzdumps, but we would also like to mount a CephFS from the second cluster. The first cluster is configured in /etc/ceph/ceph.conf + ceph.client.admin.keyring; the second cluster is... (see the CephFS mount sketch below the result list)
  2. stefws

    control Live mig network choice

    Got a testlab with two cluster networks, a private 3x1 Gbps bonded one and a public 100 Mbps one. I just want to make sure live migration always happens across the ring0 network, so I've assigned the unqualified host names in /etc/hosts to the ring0 network; however, live migration doesn't pick the ring0 (private) network now but... (see the migration-network sketch below the result list)
  3. stefws

    PVE5.2 novnc Console not working

    Just updated from the pve-no-subscription to the pvetest repo, but now I fail to get a console on VMs. Any clues/hints?
  4. stefws

    PVE5 - pvetest/pve-no-subscription repo

    Where might I find the pve-no-subscription repo for PVE 5 for a testlab (see the repo sketch below the result list)? Installed a node from the 5.2.1 ISO file, and trying this fails:
  5. stefws

    Fails to mount CephFS from Mimic Cluster

    I'm trying to mount a CephFS of a Mimic cluster with a Luminous client on a PVE 5.2 node, but am seeing this: The same mount works just fine on the Mimic cluster's CentOS 7.5 nodes:
  6. stefws

    Migrate VM images between 2 Ceph Clusters

    Want to upgrade an old 3.4 testlab connected to a Hammer Ceph cluster (I know :)). The plan is first to migrate the VM images to a newly installed Ceph Mimic cluster; would it be possible to connect to both Ceph clusters (e.g. maybe by upgrading the Ceph client to Jewel or later; see the rbd export/import sketch below the result list)?
  7. stefws

    in place upgrade 4.4 to 5.x

    Thinking it's time to consider an upgrade from jessie 4.4 to the latest 5.1 by following this 'in place upgrade' procedure, and wondering if it could be an issue that we're using two corosync rings, HA clustering and shared storage from an iSCSI array only? If anything were to go wrong, could we...
  8. stefws

    PVE 3.4 - pve-kernel-2.6.32-48-pve vs openvswitch 2.3.2

    Just attempted to patch an older testlab PVE 3.4 to the latest patch levels. Found a newer kernel, pve-kernel-2.6.32-48-pve; however, when booting on it, our Open vSwitch looked fine but couldn't get traffic in/out through the bonded NIC plugged into the single vmbr1 OVS bridge, and thus had no access to the Ceph cluster...
  9. stefws

    guest on kernel 4.14-12 fails to show NF conntrack

    If we boot a VM/guest on kernel 4.14.12 with KPTI enabled, it'll no longer show netfilter stats as on earlier kernels (4.13.4 and below), e.g. always returning a zero value by: Can't really find a good reason on the 'net. Anyone know why?
  10. stefws

    NF tuning not applied at boot time

    Have this config file on the hypervisor/host nodes (see the sysctl sketch below the result list): But after boot we still find the default values and wonder why:
  11. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    The last two live migrations of a VM carrying relatively heavy network traffic seemed to crash the VM on the target host at resume, in the virt-net driver. See the attached SD from the target VM console.
  12. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    Got an older 7-node 3.4 testlab (running Ceph Hammer 0.94.9 on 4 of the nodes and only VMs on the other 3), which we wanted to patch up today, but after rebooting our OSDs won't start; it seems ceph can't connect to the cluster. Wondering why that might be? Previous version before patching...
  13. stefws

    4.4 and memory usage

    Under Memory at https://HOST:8006/pve-docs/chapter-qm.html#qm_memory it's written: 'When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB of RAM available to the host.' But I tend to find that when getting to around 60% memory usage on a hypervisor host it starts to send...
  14. stefws

    VM networks are temporary unavailable during Hypervisor reboot

    We have all VM networks virtualized by VLAN tagging and connected through a single OVS switch, vmbr1. This switch is connected to a bond of 2x10 Gbps NICs cabled to a virtual chassis composed of two Cisco Nexus 5672UP switches. Sometimes during a reboot of a PVE 4.4 hypervisor node (probably during...
  15. stefws

    SW watchdog sometimes fires NMIs while patching

    Running a 7-node 4.4 cluster with VM storage on LVs in volume groups whose PVs come from a shared iSCSI SAN. It seems either our iSCSI devices or the number of VM LVs cause slow OS probing during grub updates, with the risk that the SW watchdog sometimes fires an NMI during grub configuration as it...
  16. stefws

    iSCSI LUNs or VM image LVs slow grub when updating PVE

    Got a 4.4 production cluster attached to a multipathed iSCSI SAN from an HP MSA1040, divided the MSA into two disk groups A & B, then created 5+1 iSCSI LUNs per MSA disk group and mapped those to PVs in four volume groups on each hypervisor node like this: the vgXbck LUNs are mapped to an NFS server...
  17. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    Whenever we need to upgrade the pve-kernel in our PVE 4.4 HA cluster, we find grub updating to be very slow (it seems to be looking for other boot images on all known devices). In fact it's so slow that the HA SW watchdog sometimes fires an NMI; depending on at what stage this happens, it sometimes... (see the grub/os-prober sketch below the result list)
  18. stefws

    Debian minor issue?

    The latest enterprise patches result in: Due to: The previous version has:
  19. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    Wanted to roll out last week's changes to PVE 4.3, so we migrated all VMs off the first node and patched it through apt-get upgrade. The SW watchdog then fired an NMI during patching of the pve-cluster package and the node rebooted; it came up fine and we finished with: dpkg --configure -a and another apt-get...
  20. stefws

    PMTUD or large MTU size

    Running our PVE hypervisor nodes attached to two Cisco Nexus 5672 leaf switches configured to support MTU 9000. So our hypervisor nodes all allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs. Two CentOS 6 VMs are used as an HAProxy load balancing... (see the MSS clamping sketch below the result list)
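
Config sketches referenced above

Result 1 (second CephFS from another cluster): a minimal sketch of mounting a second cluster's CephFS with the kernel client by pointing directly at that cluster's monitor and passing its key in a separate secret file. The monitor IP, mount point and file names below are placeholders, not taken from the thread.

    # store only the base64 key of the second cluster's client in its own secret file
    echo 'AQD...base64-key...' > /etc/ceph/cluster2.client.admin.secret
    mkdir -p /mnt/cephfs2
    # mount by addressing the second cluster's monitor(s) explicitly
    mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs2 \
        -o name=admin,secretfile=/etc/ceph/cluster2.client.admin.secret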
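
Result 2 (pinning live migration to one network): one way to force migration traffic onto a specific subnet, assuming a PVE version whose /etc/pve/datacenter.cfg supports the migration property; the CIDR below is a placeholder for the ring0 (private) subnet.

    # /etc/pve/datacenter.cfg
    migration: secure,network=10.10.10.0/24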
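
Result 4 (pve-no-subscription repo for PVE 5): a sketch of the APT source typically used for the no-subscription repository on a PVE 5 / Debian stretch node; verify the suite name against the installed release before enabling it.

    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

followed by an apt-get update.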
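
Result 6 (moving RBD images between two Ceph clusters): assuming one host has a client able to talk to both clusters (the thread itself suggests upgrading the client to at least Jewel), an image could in principle be streamed with rbd export/import; the conf, keyring, pool and image names below are placeholders.

    # read from the old (Hammer) cluster, write to the new (Mimic) cluster
    rbd -c /etc/ceph/hammer.conf --keyring /etc/ceph/hammer.client.admin.keyring \
        export rbd/vm-100-disk-1 - \
    | rbd -c /etc/ceph/mimic.conf --keyring /etc/ceph/mimic.client.admin.keyring \
        import - rbd/vm-100-disk-1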
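
Result 10 (netfilter sysctls not applied at boot): one common cause is that the keys belong to a module (e.g. nf_conntrack) that isn't loaded yet when the sysctl settings are applied; if that is what's happening here, loading the module early and reapplying the settings usually helps. The file names and the example value are placeholders.

    # /etc/modules-load.d/conntrack.conf  -- load the module at boot
    nf_conntrack

    # /etc/sysctl.d/99-conntrack.conf     -- example tunable
    net.netfilter.nf_conntrack_max = 262144

    # reapply all sysctl configuration without rebooting
    sysctl --system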
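
Results 16/17 (slow grub updates with many iSCSI LUNs): if the delay really comes from os-prober scanning every LUN and LV for other operating systems, that scan can be skipped on a hypervisor that only ever boots PVE; this is a general GRUB option, not something specific to these threads.

    # /etc/default/grub
    GRUB_DISABLE_OS_PROBER=true

    # regenerate the grub config afterwards
    update-grub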
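
Result 20 (mixed MTU 9000/1500 paths): the question is truncated, but if it concerns PMTUD being unreliable between jumbo-frame and standard-MTU segments, one common workaround is clamping the TCP MSS on the forwarding host or load-balancer VM; the rule below only affects forwarded TCP connections.

    # clamp TCP MSS to the path MTU on forwarded connections
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu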
