So we always use 3/2 replication; however, we noticed that we were running out of space, changed to 2/2 to keep things going, and did not act fast enough.
In South Africa we have a shortage of enterprise SSDs, so we are waiting on new stock we ordered via Amazon. But we have now found that 2 disks are failing...
Hi guys
I would like an easy way to block the IPs from the following lists in the PVE firewall, as per: https://forum.proxmox.com/threads/automated-proxmox-firewall-management.22813/
OpenBL Base
Spamhaus DROP and EDROP
Blocklist.de STRONGIPS
ISC DSHIELD
Emerging Threats CINS
Are there any scripts...
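One hedged approach is to load a list into a cluster-wide PVE IPSet via the API and then drop traffic matching it. This is only a sketch: the IPSet name is illustrative, and the Spamhaus URL and `pvesh` paths should be verified against your PVE version before use.

```shell
# Sketch: create a cluster-wide IPSet and fill it from the Spamhaus DROP
# list (IPSet name and URL are assumptions; verify before relying on this).
pvesh create /cluster/firewall/ipset --name spamhaus-drop \
  --comment "Spamhaus DROP"

curl -s https://www.spamhaus.org/drop/drop.txt \
  | grep -v '^;' \
  | awk '{print $1}' \
  | while read -r cidr; do
      pvesh create /cluster/firewall/ipset/spamhaus-drop --cidr "$cidr"
    done

# Then reference the set in a drop rule, e.g. in /etc/pve/firewall/cluster.fw:
#   [RULES]
#   IN DROP -source +spamhaus-drop
```

The same loop could be repeated per list and run from cron to keep the sets fresh; adding one CIDR per API call is slow for large lists, so batching or a purpose-built script may be preferable.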
So I am thinking of installing Suricata, but I just want to check that this is correct:
apt-get install suricata
modprobe nfnetlink_queue
nano /etc/pve/firewall/132.fw
Add the following to the file above:
[OPTIONS]
ips: 1
ips_queues: 0
Now go to Proxmox and make sure the firewall is enabled at the Datacenter level, which it...
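The steps above can be sketched end-to-end as below. This assumes VM ID 132 from the post, that `132.fw` does not already contain an `[OPTIONS]` section, and that restarting `pve-firewall` is enough to pick up the change:

```shell
# Sketch of the Suricata/IPS setup above (VM 132 is the example from the post).
apt-get install suricata        # install the IPS engine
modprobe nfnetlink_queue        # kernel module used to queue packets to Suricata

# Enable IPS for the guest in its firewall config
# (assumption: the file has no [OPTIONS] section yet):
cat >> /etc/pve/firewall/132.fw <<'EOF'
[OPTIONS]
ips: 1
ips_queues: 0
EOF

# Reload the firewall so the option takes effect (assumption):
systemctl restart pve-firewall
```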
Hi
So we have been using the CRUSH rule "replicated_rule" for SSDs only.
I now want to add HDDs to each server so we have some "slow" storage with lots of disk space.
Are the steps as follows:
1. Run the following on any node: ceph osd crush rule create-replicated replicated_hdd default...
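Assuming device classes are available (Luminous or later, which Proxmox 6's Ceph is), the full sequence might look like the sketch below. The pool name `hdd_pool` and the PG count are placeholders; the important part is also pinning the existing pool to an SSD-only rule so it does not spill onto the new HDDs:

```shell
# Sketch: class-based rules for fast and slow storage (names are placeholders).
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Pin the existing pool to SSDs so adding HDDs does not move its data:
ceph osd pool set <existing_pool> crush_rule replicated_ssd

# Create the new slow pool on the HDD rule (PG count is illustrative):
ceph osd pool create hdd_pool 128 128 replicated replicated_hdd
```

Switching the existing pool's rule will trigger some rebalancing, so it is worth doing before the HDD OSDs are added.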
Hi guys
Not ready to shut down the whole Ceph cluster, but I realised I have debugging on, which is the default with a Proxmox/Ceph install.
So I would like to turn off debugging, and would really prefer not to reboot a live cluster.
Is it easiest just to do the following, quoted from another Google result...
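Debug levels can be changed on running daemons with `injectargs`, no restart needed. A minimal sketch (the subsystem list is illustrative, not exhaustive):

```shell
# Sketch: turn common debug subsystems down to 0/0 on the live daemons.
ceph tell osd.* injectargs '--debug_osd 0/0 --debug_ms 0/0'
ceph tell mon.* injectargs '--debug_mon 0/0 --debug_ms 0/0 --debug_paxos 0/0'

# Persist the same settings under [global] in /etc/pve/ceph.conf so they
# survive daemon restarts, e.g.:
#   debug_osd = 0/0
#   debug_ms  = 0/0
```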
Hi guys
We have Micron 5210 drives in Ceph.
I read this today:
https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=toggle_view_desktop#Drive_cache_is_slowing_you_down
It states we must disable the write cache?
Should I do this on all our drives?
Can we do it on a live Ceph...
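For SATA drives, the volatile write cache can be toggled per device with `hdparm` while the OSD is running; a cautious sketch (device name is a placeholder, and doing one OSD at a time on a live cluster seems prudent):

```shell
# Sketch: check and disable the volatile write cache on one drive.
hdparm -W /dev/sdX      # show current write-cache state
hdparm -W 0 /dev/sdX    # disable write cache

# hdparm settings do not survive a reboot; a udev rule (rule file name is
# an assumption) can reapply the setting at boot:
#   ACTION=="add", KERNEL=="sd[a-z]", RUN+="/sbin/hdparm -W 0 /dev/%k"
```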
Hi
I have never seen this before. Usually a disk fails completely, but this is new. Please advise whether this disk has failed or not. I have another 11 of these disks and they don't give these results; only this particular one does.
We typically only run KVM VMs in Proxmox and currently use krbd. I was informed by a colleague that librbd is better for QEMU/KVM workloads. We mainly have VMs hosting websites and SQL.
He stated there have been major improvements to librbd recently that make it better? Something about it being rewritten...
So I created a new LXC container and set cores to 2 and the CPU limit to 2.
The server itself has 64GB of memory and 24 cores (12-core processors x 2 sockets).
However, when this server is heavily tested and load goes up, we see this in top on the node:
top - 08:34:49 up 10:55, 3 users, load average...
We have cPanel CentOS 7 servers on Proxmox 6 using LXC.
Are there any known issues we should be aware of? We need to upgrade around 80 LXC containers, as systemd is outdated on them and they are running CentOS 7.
Does anyone have experience with this or know of any issues we should watch out for?
Planning...
Hi guys
Can we set a time window, for example during business hours, say from 7am to 5pm, for garbage collection and pruning to start, rather than having them run during the night at the same time as backups?
It seems to slow the backup server somewhat.
UPDATE: Nevermind, found it.
Thanks
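For other readers: the schedules live on the datastore and use systemd calendar-event syntax. A sketch, assuming a datastore named `store1` (the name and times are placeholders, and the exact option names should be checked against your PBS version):

```shell
# Sketch: run GC and prune at 7:00 instead of overnight
# ("store1" is a placeholder datastore name).
proxmox-backup-manager datastore update store1 \
  --gc-schedule '7:00' \
  --prune-schedule '7:30'
```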
Trying to run the following:
ceph daemon osd.6 perf
Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n"
Not sure what is wrong.
ceph.conf is as per below...
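Regardless of what ceph.conf contains, `ceph daemon` talks to a node-local admin socket, so a common cause of this error is running the command on a node that does not host osd.6. Also note that `perf` on its own is incomplete; the subcommands are `perf dump` and `perf schema`. A sketch:

```shell
# Sketch: admin sockets are local -- run this on the node hosting osd.6.
ceph daemon osd.6 perf dump

# If name resolution still fails, point at the socket file directly
# (path shown is the Debian/PVE default):
ceph daemon /var/run/ceph/ceph-osd.6.asok perf dump
```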
Hi
Is it possible to have Ceph compression work on existing pools? I think, since I only enabled it now, compression is only applied to new data. How do I compress existing data? I am using aggressive mode with lz4
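As far as I know that is correct: pool compression only applies to newly written data, and existing objects stay uncompressed until they are rewritten. One hedged way to force a rewrite in PVE is to move each disk to another storage and back (storage and VM/disk IDs below are placeholders):

```shell
# Sketch: enable aggressive lz4 on the pool (affects new writes only).
ceph osd pool set <pool> compression_algorithm lz4
ceph osd pool set <pool> compression_mode aggressive

# Rewriting existing data compresses it; moving a disk away and back
# is one way to do that (placeholders: VM 100, disk scsi0):
qm move_disk 100 scsi0 <other_storage> --delete
qm move_disk 100 scsi0 <ceph_pool> --delete
```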
Hi guys
We wanted to move to 2/2 for a bit while we wait for our new SSDs to arrive, as we have limited storage space in one cluster. However, when doing so and moving from 3/2 to 2/2, we noticed that all our VMs pause or become "read only" when Ceph is rebalancing if a disk is taken out and a...
I think I may have found something.
I had an issue with disk space and hence changed from replication x3 to x2, knowing the possible risks.
However, it was meant to be temporary while I added more OSDs.
But now, when I add more OSDs to new servers, I am noticing very high IO wait and servers that "freeze".
I...
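One thing worth trying while the new OSDs backfill is throttling recovery so client IO is not starved. A sketch (the values are conservative examples, not tuned recommendations):

```shell
# Sketch: slow down backfill/recovery so client IO survives the rebalance.
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'
```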
I am reading in some posts that usable disk space is measured based on the smallest OSD disk in the cluster?
So, for example, if we have the below on each of 6 nodes:
2 x 2TB
1 x 500GB
Are we saying disk space is lost due to the 500GB?
Should we rather just remove the 500GB? We just had...
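A rough back-of-envelope (numbers hypothetical) suggests a small OSD does not by itself cap capacity: with default CRUSH weights, which are proportional to size, every OSD ends up at the same fill percentage. The practical issue is that PG placement is coarse-grained, so small OSDs deviate from that ideal the most and tend to hit the full ratio first.

```python
# Hypothetical node: 2 x 2000 GB and 1 x 500 GB OSDs.
osd_sizes_gb = [2000, 2000, 500]
raw_total = sum(osd_sizes_gb)        # 4500 GB raw on this node
data_gb = 1800                       # raw data CRUSH places on the node

# Default CRUSH weights are proportional to capacity, so each OSD gets
# data proportional to its size -- the fill *percentage* is identical:
for size in osd_sizes_gb:
    placed = data_gb * size / raw_total
    print(f"{size:>5} GB OSD: {placed:6.0f} GB placed, {placed / size:.0%} full")
```

In this idealised model every OSD lands at 40% full; real clusters see the 500GB OSD drift above that, which is why it can feel like the smallest disk sets the limit.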
Hey guys
I have a question about something I forgot to test during our testing phase.
If we stop and "out" a disk but then realise we did the wrong disk, can we just bring it back in again without "destroying the data" on it first?
Any risk in doing so? Will Ceph just use the data on the OSD and just...
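To the best of my knowledge, an OSD that was only stopped and marked "out" still has its data, and bringing it back does not require zapping it first; Ceph recovers only the deltas written while it was away. A sketch (`N` is a placeholder OSD id):

```shell
# Sketch: bring a mistakenly outed OSD back without destroying it.
ceph osd in osd.N           # mark it back in
systemctl start ceph-osd@N  # start the daemon on its node
ceph -s                     # watch it backfill/recover only the changes
```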
Hi guys
We have a stable, well-functioning Ceph cluster with one Ceph pool where all our data lives. We have a few VMs currently running on that pool.
I noticed there is an option called krbd, and some forum posts state that performance can be increased by enabling krbd on the...