Search results

  1. Ceph replication setup via GUI?

    Any plans to integrate Ceph replication (RBD mirroring) functionality into the GUI, with both snapshot and journaling modes? The current wiki tutorial (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers only the journaling mode and is not fully suitable for the recent Pacific Ceph release...
  2. Two-way mirroring Ceph cluster how-to?

    The PVE wiki (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers one-way mirroring. Could anyone advise how to extend one-way mirroring to two-way, with respect to the original PVE wiki howto? Is it enough to install rbd-mirror on the master (source)? If so, is it enough to install it on one node of the source Ceph...
  3. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    I'm facing an issue with creating a ZFS pool on dm-mapper devices (clean PVE 6.3). I have an HP Gen8 server with a dual-port HBA connected with two SAS cables to an HP D3700, and dual-port SAS SSD disks (Samsung 1649a). I've installed multipath-tools and changed multipath.conf accordingly ...
  4. After enabling Ceph pool one-way mirroring, pool usage grows constantly and the pool may soon overfill

    After an upgrade to PVE 6 and Ceph 14.2.4, I enabled pool mirroring to an independent node (following the PVE wiki). Since then my pool usage has been growing constantly, even though no VM disk changes are made. Could anybody help me sort out where my space is flowing out? The pool usage is going to...
  5. Web UI cannot create CEPH monitor when multiple public nets are defined

    According to the Ceph docs (https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1), several public nets can be defined (useful in the case of RBD mirroring, when the slave Ceph cluster is located in a separate location and/or monitors need to be created on a different network...
  6. [SOLVED] Ghost monitor in CEPH cluster

    After an update from 5.x to 6.x, one of the Ceph monitors became a "ghost", with status "stopped" and address "unknown". It can be neither run, created, nor deleted, with errors as below: create: monitor address '10.10.10.104' already in use (500); destroy: no such monitor id 'pve-node4' (500). I deleted...
  7. [SOLVED] Why does KNET choose the ring with the higher priority instead of the lower one (as stated in the manual)?

    Could anyone explain why corosync (KNET) chooses the best link by the highest priority instead of the lowest one (as written in the PVE wiki)? Very confused with corosync 3 indeed... quorum { provider: corosync_votequorum } totem { cluster_name: amarao-cluster config_version: 20 interface...
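    On the priority semantics the thread asks about: with knet in corosync 3, in passive link mode the link with the *highest* knet_link_priority value is preferred, which is the opposite of the older ring-priority convention. A minimal sketch of the relevant totem section, with hypothetical values (not a verified config for this cluster):

    ```
    totem {
      link_mode: passive          # one link active at a time, fail over on fault
      interface {
        linknumber: 0
        knet_link_priority: 10    # higher value = preferred link under knet
      }
      interface {
        linknumber: 1
        knet_link_priority: 1     # lower value = fallback link
      }
    }
    ```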
  8. PVE 6 cluster nodes randomly hang (10GbE network down)

    I've noticed that after installing a PVE 6.x cluster with a 10Gb network for inter-cluster and storage (NFS) communication, cluster nodes randomly hang - they remain reachable through the Ethernet (1GbE) network but NOT accessible via the main 10GbE one, so neither cluster nor storage is available. Yesterday it happened...
  9. [SOLVED] Warning after successful upgrade to PVE 6.x + Ceph Nautilus

    After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears on the Ceph monitoring panel. Have I missed something during the upgrade, or is this expected behavior? Thanks in advance
  10. After upgrade to 5.4 redundant corosync ring does not work as expected

    After an upgrade to PVE 5.4 I'm facing a problem with the corosync second ring functionality. corosync.conf: logging { debug: off to_syslog: yes } nodelist { node { name: pve-node1 nodeid: 1 quorum_votes: 1 ring0_addr: 10.10.10.101 ring1_addr: 10.71.200.101 } node {...
  11. [SOLVED] Windows 2012R2 guest randomly stops on PVE console connection

    One of our VMs randomly stops (hangs) during console connections. Could anyone hint at what could cause such behavior? As seen in the screenshot below, the VM gets a STOP command on console connect, then tries to connect to vncproxy and fails. What could raise a VM STOP on console connect...
  12. Ceph OSD creation and SAS multipath

    Does Proxmox (GUI and pveceph), on OSD creation, take into consideration that a SAS disk could have (and in a correct server configuration more likely does have) multipath enabled/configured (with dm-multipath)?
  13. CephFS storage limitation?

    Very excited about the Ceph integration in PVE. However, there is one point I would be happy to clarify (found nothing with "forum search" so far): why is CephFS storage in PVE limited to backups, images, and templates only? Well, I know I can mount a folder located under the CephFS mount point, but very...
  14. Hotplug memory limits total memory to 44GB

    With default PVE settings I cannot assign more than 44 GB of RAM. With this workaround (https://forum.proxmox.com/threads/hotplug-memory-limits-total-memory-to-44gb.30991/) - /etc/modprobe.d/vhost.conf with the content: options vhost max_mem_regions=509 - I'm able to set up more RAM; however, I'm...
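    For reference, the workaround quoted in that snippet boils down to raising the vhost memory-region limit via a modprobe option. A hedged sketch of the file (the value 509 comes from the linked thread, not from official documentation; the module must be reloaded, or the host rebooted, for it to take effect):

    ```
    # /etc/modprobe.d/vhost.conf  (sketch, per the forum workaround)
    options vhost max_mem_regions=509
    ```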
  15. Compiling silk-guardian with PVE kernel

    Can anyone help with compiling silk-guardian against the latest PVE kernel? I would very much appreciate it if someone could share the silk.ko driver. I have no idea how to sort out the following error: root@pve:/tmp/silk-guard# export KCPPFLAGS="-fno-pie" root@pve:/tmp/silk-guard# export CPPFLAGS="$KCPPFLAGS"...
  16. [SOLVED] Problem with installing Proxmox-VE on Debian

    Debian 9.2 (netinstall). Followed the Proxmox wiki (https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch). On the step: apt install proxmox-ve postfix open-iscsi - Setting up pve-manager (5.2-1) ... Job for pvestatd.service failed because the control process exited with error code. See...
  17. Redundant totem ring marked faulty

    I have a cluster with 6 nodes and two separate networks (Ethernet via Cisco switches and Ethernet over InfiniBand (Mellanox)). Following the wiki, I've set up two totem rings. root@storageB:/dev/disk# cat /etc/pve/corosync.conf logging { debug: off to_syslog: yes } nodelist { node {...
  18. Upgrade ZFS on Linux to 0.7.x

    Any chance of getting an update of ZFS on Linux to at least 0.7.1 in the near future? I've long been waiting for the zfs receive "-x" flag to filter out non-existent attributes, in order to be able to migrate from external FreeNAS storage (NFS on ZFS) to local ZFS.
  19. Enable RDMA NFS support on startup

    Hello all, I'm trying to set up a Proxmox cluster where one of the nodes is used for quorum as well as a backup NFS server, using the RDMA protocol (InfiniBand). Thanks to this manual: https://docs.oracle.com/cd/E52668_01/E60671/html/uek3_techpreview-NFSoRDMA.html I've managed to successfully set up...
  20. Change ZFS snapshot name using pve-zsync

    Is there any way to change the ZFS snapshot name format, in order to be able to sync VMs/pools with an external ZFS box (FreeNAS, for example)? In particular, FreeNAS does not support ":" and/or "-" in snapshot names: root@pve02A:/etc# pve-zsync create --source tank --dest...
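    As a stopgap for the last question, one could strip the characters FreeNAS rejects out of a snapshot name before syncing. A minimal sketch (the snapshot name below is hypothetical, made to resemble a pve-zsync-style name; `tr -d` simply deletes ":" and "-"):

    ```shell
    # Hypothetical pve-zsync-style snapshot name containing ":" and "-"
    name="rep_default_2021-03-01_12:00:00"

    # Delete the ":" and "-" characters that FreeNAS rejects
    safe=$(printf '%s' "$name" | tr -d ':-')
    echo "$safe"   # rep_default_20210301_120000
    ```

    A real rename of an existing snapshot could then use something like `zfs rename tank@$name tank@$safe`.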
