Hi all, I faced the same issue a few days ago, with a reinstalled node.
After applying @IDemoNI's solution:
systemctl restart pvestatd.service
the status went from unknown to available :) :cool:
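For anyone hitting the same thing, a quick way to confirm afterwards is to check the daemon and the storage status (standard commands, nothing specific to my setup):
systemctl status pvestatd.service
pvesm status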
Hi @aaron
I already tried writing directly to the disk. I also found a qcow2 image of a recently backed-up VM (around 60GB) plus some other harmless files and small filesystems,
wrote them directly to one of the SSD disks, and it didn't crash or misbehave.
I'm guessing that maybe my controller is not the best...
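For reference, the raw write test I mean was roughly along these lines - the image name and target device here are only placeholders, and writing to the raw device is destructive, so double-check the path:
dd if=vm-backup.qcow2 of=/dev/sdX bs=1M oflag=direct status=progress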
Hello all,
for the past 3 years we have been running a multi-node Ceph cluster, and the controller used in HBA mode is a P440ar.
Recently we decided to go hyper-converged Ceph with 2 pools, i.e. to have another pool based on SSDs.
Once we started moving a VM onto the SSD Ceph pool, all SSD OSDs...
OK, we did the 2-nodes-only SSD thing, and still no good results.
I downgraded the pool config to 2/2 replicas, since I only have 2 nodes for it, and all PGs reallocated correctly.
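For the record, the 2/2 change was just the standard pool settings, something like this (the pool name is only an example):
ceph osd pool set ceph-ssd size 2
ceph osd pool set ceph-ssd min_size 2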
The results are:
Moved a test VM (a very light VM) to the SSD disk pool.
After 10 seconds - BOOM! 3 SSD...
Hi aaron,
Thank you very much for replying.
We saw IO errors, but only after the controller had disconnected, so we couldn't pin it down directly.
Situation is:
We finished adding all the SSD OSDs,
then, let's say on node pve03, I changed a VM's storage from the HDD Ceph pool to the SSD Ceph pool.
After 5...
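Just for clarity, the move itself was a normal disk move; the CLI equivalent would be roughly this (VM ID, disk and storage names are placeholders):
qm move_disk 100 scsi0 ceph-ssd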
Hello all,
Background:
We have 2 sites/DCs, with a cluster of 9 nodes at each site/DC.
The way Ceph is built:
4 disks of 2.4T and 2 SSD disks of 200GB for WAL/journal.
Each server is an HPE DL380 Gen9.
Controllers:
P840 for OS
P440ar for Ceph (HBA mode)
Recently we purchased 10...
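For context, each OSD was created with the data disk on the spinner and the DB/WAL on one of the SSDs, roughly like this (Bluestore assumed, device paths are examples only):
pveceph osd create /dev/sdb --db_dev /dev/sdf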
I'm deleting Elastic indexes once in a while - it detects when the data grows, but not when it's deleted,
so when I come around to delete, I'll just run fstrim manually on the same occasion.
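Concretely, the manual run is just the standard command against the data mount (the mount point here is only an example):
fstrim -v /var/lib/elasticsearch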
Thanx
Found the issue:
the former Linux sysadmin had mounted the second disk directly on the device and not on a partition,
so the discard option wasn't able to find the device via the OS.
I had to trim all filesystems in the VM's OS for it to take effect. Now it's OK, and the VM reflects the right disk usage percentage on local...
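For anyone with the same layout, once the filesystem sits on a proper partition the discard option in /etc/fstab would look something like this (device, mount point and FS type are placeholders):
/dev/sdb1  /data  xfs  defaults,discard  0 0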
Those VMs' guest OS is CentOS 7, so they should be able to use TRIM.
Server controllers - Ceph uses a P440ar (uses TRIM); local storage and the OS use a P440 (supports and uses TRIM as well).
My VM uses 2 disks - an operating system disk of 60 GB located on Ceph,
and a second disk...
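To double-check TRIM from inside the guest, lsblk can show whether discard is actually passed through (non-zero DISC-GRAN/DISC-MAX means the disk accepts it):
lsblk --discard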
Hello all,
I have 2 VMs with local storage on 2 Proxmox nodes that are in a cluster of 5.
Proxmox VE 6.4.8
My issue is:
The mentioned VMs are Elasticsearch data nodes, fed with the same data.
The local storage volume is 7T and is presented as a Directory storage on each node they run on.
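For completeness, each directory storage was added the usual way, roughly like this (storage ID, path and content type are examples):
pvesm add dir elastic-data --path /mnt/elastic --content images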
When...
We are not able to reproduce it in a test environment, and the second issue is that we cannot create something at the same scale as we have in production;
the production data scale is 29T used out of 70T --> 29T/70T.
Do you have any suggestions for the safest way we could perform this...
aaron, thank you very much.
As you understood from the above, we are in a production environment.
Just to make sure: when I reassign the rule of the current pool to replicate_hdd, should I expect downtime or data corruption due to this shift?
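In other words, a change along these lines (the pool name is just a placeholder, replicate_hdd being the rule mentioned above):
ceph osd pool set vm-pool crush_rule replicate_hdd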
Thanks again in advance for your help
Hi aaron,
Thank you very much for your answers and prompt reply!!!
Highly Appreciated.
I might be repeating some stuff we've both already mentioned here in previous replies, but it's just to be sure that I'm 100% certain of what I need to do (very sensitive production area, so I cannot...