RAM utilization shown in the UI is the memory reserved by the Linux kernel for allocation. Please log in to the system and use free -m or top to see the actual utilization.
How did you check the write performance? If the disk cache is enabled, the disk performance will be different; in addition, the OS also has its own cache parameters.
Try with fio
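For example, a rough fio sketch for a sequential write test (the file path and size are placeholders to adjust; --direct=1 bypasses the page cache so caching does not skew the result):

fio --name=write-test --filename=/path/to/testfile --size=4G --rw=write --bs=1M --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting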
Step 1: pvecm expected 1
Step 2: Copy /etc/pve/nodes/<failednode>/qemu-server/*.conf to /etc/pve/nodes/<active-node>/qemu-server/
failednode = hostname of the node that has failed
active-node = hostname of the node you want to move to
For a VM to move to another node (i.e. a node which is up), the VM must be on shared storage. If the VM is already on shared storage,
run pvecm expected 1 and then migrate.
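A minimal sketch of those steps, assuming the VM disks are on shared storage (substitute the real hostnames; the configs are moved rather than copied so each VMID exists under only one node):

pvecm expected 1
mv /etc/pve/nodes/<failednode>/qemu-server/*.conf /etc/pve/nodes/<active-node>/qemu-server/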
No need for a full reinstall; see the command sketch after the steps:
1. Edit /etc/hosts with the new name appropriately
2. Edit the /etc/hostname file with the new hostname
3. Set the new hostname for the existing session as well (hostnamectl set-hostname <newhostname>)
4. Restart the services in the following order: "systemctl restart pvestatd" "systemctl restart...
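A minimal sketch of steps 1-4 above, assuming the new hostname is pve-new (a placeholder):

# 1. + 2. put the new name in /etc/hosts and /etc/hostname
sed -i 's/old-hostname/pve-new/g' /etc/hosts
echo pve-new > /etc/hostname
# 3. apply it to the running session
hostnamectl set-hostname pve-new
# 4. restart the PVE services, starting with pvestatd
systemctl restart pvestatd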
Can you share the following from Site A and Site B individually?
ceph df
ceph osd lspools
ls -ltr /etc/pve/priv/ceph/
cat /etc/pve/storage.cfg
I just want to confirm one thing, that's it. I guess I know the issue.
It should work like this:
For my Site A:
cat /etc/pve/storage.cfg
rbd: vm-siteA
     content images,rootdir
     krbd 1
     pool vm-siteA
For my Site B:
rbd: vm-siteB
     content images,rootdir
     krbd 1
     pool vm-siteB
Now suppose you want to mount them cross-site; it should look like this:
for Site A...
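A sketch of what the cross-mounted entry on Site A could look like, assuming Site B's monitors are reachable from Site A and Site B's keyring has been copied to /etc/pve/priv/ceph/vm-siteB.keyring (the monitor addresses below are placeholders):

rbd: vm-siteB
     content images,rootdir
     krbd 1
     pool vm-siteB
     monhost 10.0.2.11 10.0.2.12 10.0.2.13
     username admin

The same pattern applies in reverse on Site B for the vm-siteA pool.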
What is the difference between CRUSH Rule 1 and Rule 2?
Rule 1:
rule replicated_rule1 {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type chassis
    step emit
}
Rule 2:
rule replicated_rule2 {
    id 0
    type replicated
    min_size 1...
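The visible difference so far is the failure-domain bucket in the chooseleaf step (type chassis in rule 1); to compare the two rules in full, you can dump them, e.g.:

ceph osd crush rule dump replicated_rule1
ceph osd crush rule dump replicated_rule2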
Yes, follow the removal procedure.
Stop pve-cluster and corosync, then perform the delnode using the pvecm command; if HA is configured, stop the CRM and LRM as well. The services need to be stopped on the node to be deleted, and delnode needs to be run afterwards from the active nodes.
After that just update...
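A rough command sketch of that sequence (the node name is a placeholder; if HA is configured, the CRM/LRM are stopped first so the watchdog does not fence the node):

# on the node being removed
systemctl stop pve-ha-crm pve-ha-lrm   # only if HA is configured
systemctl stop pve-cluster corosync
# then, from one of the remaining active nodes
pvecm delnode <nodename>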
If the warning message occurs frequently, it may indicate that the token timeout in /etc/corosync/corosync.conf needs to be increased.
The attribute token_warning can be set in the totem section of the /etc/corosync/corosync.conf file.
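For example, a corosync.conf sketch with both values (the numbers are placeholders to adjust for your network):

totem {
  # ... existing totem settings ...
  token: 10000          # token timeout in milliseconds
  token_warning: 75     # warn when the token round trip exceeds this percentage of the timeout (0 disables the warning)
}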
According to this, the pool size is 77 TB. With a near-full ratio of 85%, the available space would be 65.45 TB, and with a replication factor of 2, the usable space would be approximately 32.75 TB. You have used 20 TB of the available space, so according to this you must be left with...
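Working that arithmetic through (assuming the 20 TB used is counted against the usable, post-replication space):

77 TB x 0.85 (near-full ratio)       ≈ 65.45 TB
65.45 TB / 2 (replication factor 2)  ≈ 32.7 TB usable
32.7 TB - 20 TB already used         ≈ 12.7 TB remaining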
A replication factor of 2/1 is not recommended for production. Nevertheless, can you share the output of
ceph df
You already have some data in the pool, so it will show only the remaining space as available.