For a non-root, limited user I would like to restrict the view to pools only in a PVE cluster. Is there any way to disable the datacenter view and/or set another view (pool view) as the default for a given user?
The idea behind this is to hide the number of cluster nodes and their names.
Thanks in advance
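For reference, visibility in the PVE GUI follows permissions: a user only sees objects on which they have at least one role. A minimal sketch of scoping a user to a single pool (pool name and user are hypothetical; pveum syntax as in recent PVE releases):

pveum acl modify /pool/customer-pool --users limited@pve --roles PVEVMUser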
I had a 3-node cluster with Ceph installed (several OSDs on each node), using network 10.63.210.0/24 for PVE (1Gb) and 10.10.10.0/24 for Ceph (10GbE).
It was OK until I added a 4th node to the PVE cluster only, from another network 10.63.200.0/24 (no Ceph/OSD on that node). The PVE cluster is happy, Ceph...
I made one of my nodes an NFS server (10Gb, NFS exported from ZFS storage with SSD disks).
I connected 12 hosts to that node (NFS storage) with default NFS settings (v4.2):
nfs: DS-254-NFS-B
    export /tank254/datastore/slave/nfs64k/id254
    path /mnt/pve/DS-254-NFS-B
    server...
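For comparison, a full NFS entry in /etc/pve/storage.cfg can also pin the mount version explicitly via the options field; a hypothetical example (not the entry above, all values made up):

nfs: nfs-example
    export /tank/export
    path /mnt/pve/nfs-example
    server 192.0.2.10
    content images
    options vers=4.1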
Just wondering, is there any reason to increase the VM machine version to the highest one after a major PVE upgrade?
In terms of VM stability/performance.
Thanks in advance,
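For anyone checking the same thing: the machine type is stored per VM and can be inspected and pinned with qm; a small sketch (VMID and version string are examples, and the new type only takes effect after a full stop/start):

qm config 100 | grep machine
qm set 100 --machine pc-i440fx-7.2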
With respect to the blog post at ceph.io:
https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/
Which memory allocator and librbd are used in Proxmox?
Are the optimizations suggested in the article above suitable for PVE + Ceph tuning?
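A quick, non-authoritative way to check which allocator the Ceph binaries on a PVE node are linked against (assuming the default install paths):

ldd /usr/bin/ceph-osd | grep -Ei 'tcmalloc|jemalloc'
dpkg -l | grep librbd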
Any plans to integrate Ceph replication (RBD mirroring) functionality into the GUI (with both snapshot and journaling modes)?
The current wiki tutorial (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers only the journaling mode and is not fully suitable for the recent Pacific Ceph release.
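For reference, snapshot-based mirroring is currently driven from the rbd CLI only; a rough sketch with hypothetical pool/image names:

rbd mirror pool enable <pool> image
rbd mirror image enable <pool>/<image> snapshot
rbd mirror snapshot schedule add --pool <pool> --image <image> 1h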
The PVE wiki (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) describes one-way mirroring only.
Could anyone advise how to extend the one-way mirroring to two-way, with respect to the original PVE wiki howto?
Is it enough to install rbd-mirror on the master (source)? If so, is it enough to install it on one node of the source Ceph...
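In case it helps: for two-way replication the rbd-mirror daemon has to run on both clusters, and with recent Ceph releases the peers can be bootstrapped in rx-tx mode. A rough sketch (site names and pool are hypothetical; first command on the source cluster, second on the target):

rbd mirror pool peer bootstrap create --site-name site-a <pool> > token
rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx <pool> token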
I'm facing an issue with creating a ZFS pool on dm-mapper devices (clean PVE 6.3).
I have an HP Gen8 server with a dual-port HBA connected with two SAS cables to an HP D3700 enclosure, and dual-port SAS SSD disks (Samsung 1649a).
I've installed multipath-tools and changed multipath.conf accordingly ...
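For reference, once the multipath devices appear under /dev/mapper, the pool can be created directly on the mapper names; a sketch with hypothetical device names:

multipath -ll
zpool create -o ashift=12 tank mirror /dev/mapper/mpatha /dev/mapper/mpathb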
After an upgrade to PVE 6 and Ceph 14.2.4 I enabled pool mirroring to an independent node (following the PVE wiki).
Since then my pool usage has been growing constantly, even though no VM disk changes are made.
Could anybody help me sort out where my space is flowing out?
Pool usage size is going to...
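A few commands that may help narrow down where the space goes when journal-based mirroring is enabled (pool/image names are placeholders); if the remote rbd-mirror daemon lags or is down, journal data can accumulate in the source pool:

rbd du -p <pool>
rbd snap ls <pool>/<image>
rbd journal info --pool <pool> --image <image>
rbd mirror pool status <pool> --verbose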
According to the Ceph docs (https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1), several public networks can be defined (useful in the case of RBD mirroring when the slave Ceph cluster is located in a separate location and/or monitors need to be created on a different network...
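As described in the linked docs, the public networks are given as a comma-separated list in ceph.conf; a hypothetical example:

[global]
    public network = 10.10.10.0/24, 10.63.200.0/24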
After an update from 5.x to 6.x, one of the Ceph monitors became a "ghost",
with status "stopped" and address "unknown".
It can be neither started, created nor deleted; the errors are as below:
create: monitor address '10.10.10.104' already in use (500)
destroy: no such monitor id 'pve-node4' (500)
I deleted...
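In case someone ends up in the same state: when the GUI refuses both create and destroy, the leftover monitor can usually be removed manually; a sketch (monitor ID and paths are assumptions, adjust to your setup):

ceph mon remove pve-node4
# then drop the stale mon section / mon_host address from /etc/pve/ceph.conf
# and clean up any leftover /var/lib/ceph/mon/ceph-pve-node4 directory on that node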
Could anyone explain why corosync (KNET) chooses the best link as the one with the highest priority instead of the lowest one (as written in the PVE wiki)?
Very confused with corosync 3 indeed...
quorum {
    provider: corosync_votequorum
}
totem {
    cluster_name: amarao-cluster
    config_version: 20
    interface...
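For what it's worth, with kronosnet the numerically higher knet_link_priority wins, so a configuration preferring link 0 would look roughly like this (values hypothetical):

interface {
    linknumber: 0
    knet_link_priority: 20
}
interface {
    linknumber: 1
    knet_link_priority: 10
}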
I've noticed that after installing a PVE 6.x cluster with a 10Gb network for inter-cluster and storage (NFS) communication, cluster nodes randomly hang: they remain available through the 1GbE ethernet network but are NOT accessible via the main 10GbE one, so neither the cluster nor the storage is available.
Yesterday it happened...
After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears on the Ceph monitoring panel.
Have I missed something during the upgrade, or is this expected behavior?
Thanks in advance
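For anyone else seeing this: the warning is expected after the Nautilus upgrade until each OSD's statistics are converted; a sketch of the per-OSD repair (run with the OSD stopped; the OSD id is an example), or the warning can simply be silenced:

systemctl stop ceph-osd@3
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-3
systemctl start ceph-osd@3
ceph config set global bluestore_warn_on_legacy_statfs false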
After an upgrade to PVE 5.4 I'm facing a problem with the corosync second-ring functionality.
corosync.conf
logging {
    debug: off
    to_syslog: yes
}
nodelist {
    node {
        name: pve-node1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.101
        ring1_addr: 10.71.200.101
    }
    node {...
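For comparison, on corosync 2.x (PVE 5.x) the second ring also needs rrp_mode and matching interface sections in the totem block; a rough sketch with hypothetical bind networks:

totem {
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 10.10.10.0
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.71.200.0
    }
}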
One of our VMs randomly stops (hangs) during console connections. Could anyone hint at what could cause such behavior?
As seen in the screenshot below, the VM gets a STOP command on console connect and then tries to connect to vncproxy and fails.
What could trigger a VM STOP on console connect...
On OSD creation, does Proxmox (GUI and pveceph) take into consideration that a SAS disk could have (and, in a correct server configuration, more likely does have) multipath enabled/configured (with dm-multipath)?
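If pveceph refuses a mapper device, one possible (unofficial) fallback is creating the OSD manually with ceph-volume on the multipath device; a sketch, device name hypothetical:

ceph-volume lvm create --data /dev/mapper/mpatha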
Very excited about the Ceph integration in PVE. However, there is one point I would be happy to clarify (found nothing with "forum search" so far): why is CephFS storage in PVE limited to backups, images and templates only? Well, I know I can mount a folder located in the CephFS mount point, but very...
With default PVE settings I cannot assign more than 44 GB of RAM.
With this workaround (https://forum.proxmox.com/threads/hotplug-memory-limits-total-memory-to-44gb.30991/):
/etc/modprobe.d/vhost.conf
with content:
options vhost max_mem_regions=509
I'm able to assign more RAM; however, I'm...
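A small sketch of applying and verifying the option after editing /etc/modprobe.d/vhost.conf (the module has to be reloaded with all vhost-net VMs stopped, or the node rebooted; the sysfs path is the standard one):

modprobe -r vhost_net vhost
modprobe vhost_net
cat /sys/module/vhost/parameters/max_mem_regions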