Hi,
I accidentally deleted ~4 TB in a container.
Now I am restoring a snapshot. It has already been running for 1d 11h. Is there a way to find out how far along it is?
I am using ZFS as the filesystem.
Thanks for any help.
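In the meantime, is something like this at least a way to see that the pool is still working? (rpool is only a placeholder for the pool name here.)
zpool get freeing rpool     # space still being freed asynchronously
zpool iostat -v rpool 5     # live I/O per vdev, refreshed every 5 seconds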
ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
I waited several hours, at least a whole night, for something to happen.
There is a replica 3 pool and an EC pool on the OSDs.
Ceph distributes across hosts.
10G-Network
Running 3 days...
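For reference, a rough sketch of the standard commands for watching the recovery (nothing cluster-specific assumed):
ceph -s                 # overall health plus recovery/backfill progress
ceph pg stat            # one-line summary of PG states
watch -n 5 'ceph -s'    # refresh the status every 5 seconds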
Hi,
I have a PVE cluster with 7 hosts, each with two 16 TB HDDs.
The HDDs all use NVMe drives as DB disks.
There are no running VMs on the HDDs. They are only used as cold storage.
A few days ago I had to swap 2 of these HDDs on PVE1. And since I already had the server open, I added two...
Looks like I found the problem here (German).
The cluster is now rebalancing and the PG number is much higher.
Thanks for the push in the right direction.
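For anyone finding this later, a quick sketch of how to check what the autoscaler did and how far the rebalance is (standard commands, no pool names assumed):
ceph osd pool autoscale-status    # current vs. target PG count per pool
ceph -s                           # progress of the ongoing rebalance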
Thanks, it looks like I need to enable the pg_autoscaler module first:
How can I do that? The pool was created with "PG Autoscale Mode on":
Edit: or is the module automatically activated when I set a target ratio for a pool?
Edit2: Module is enabled:
root@pve1:~# ceph mgr module ls | grep...
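For completeness, a sketch of the commands involved (<poolname> is a placeholder and the ratio is only an example value):
ceph mgr module enable pg_autoscaler                 # enable the manager module
ceph osd pool set <poolname> pg_autoscale_mode on    # per-pool autoscale mode
ceph osd pool set <poolname> target_size_ratio 1.0   # example target ratio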
Hello everyone,
I'm just experimenting with Ceph and wondering why the OSDs are so unevenly utilized.
There are 7 PVE servers each with a 2TB and a 4TB NVMe.
I have EC 4+3 configured with host as the failure domain.
Does anyone have any idea if this is normal or if I should try to distribute...
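To show what I mean, the distribution can be inspected with the following (standard commands, nothing cluster-specific assumed):
ceph osd df tree        # utilization and PG count per OSD, grouped by host
ceph balancer status    # whether the balancer module is active and in which mode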
I installed some Optane P4801X I had lying around and now use them as DB/WAL disks for the spinner OSDs.
Now I have write speeds that are much, much better. Thanks for the little push in the right direction!
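For anyone wanting to do the same: roughly how an OSD gets created with a separate DB device on PVE (device paths are placeholders, use your own HDD and NVMe/Optane device):
pveceph osd create /dev/sdX --db_dev /dev/nvme0n1    # HDD as the data device, NVMe/Optane as the DB device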
Hi,
I am currently experimenting with Ceph on a PVE cluster with 7 hosts.
Each host has two OSDs on 16 TB SATA hard drives. Writing directly to the HDDs with dd, I get speeds of up to 270 MB/s.
The storage and client network are both connected with 10GBit/s, which I have also tested with iperf3.
I...
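A sketch of how the Ceph layer itself can be benchmarked, in case those numbers are useful here (<poolname> is a placeholder):
rados bench -p <poolname> 60 write --no-cleanup   # 60-second write benchmark
rados bench -p <poolname> 60 seq                  # sequential reads of the objects written above
rados -p <poolname> cleanup                       # remove the benchmark objects afterwards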
Sorry, I edited my last post with additional info.
Usually I restart the container via Proxmox with "Reboot".
SSH is enabled, yes.
root@foundry:~# systemctl status sshd
* ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset...
Well, that's right.
But when I reboot the container, shouldn't the SSH settings I configured in sshd_config be applied?
That does not happen until I restart the SSH server after the container restart.
How to reproduce:
Start latest LXC container with Debian 11
Connect to container with SSH...
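Restarting the SSH server after the reboot (as mentioned above) can also be done from the host, roughly like this (<ctid> is the container ID):
pct exec <ctid> -- systemctl restart ssh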
Same here.
The problem for me is that all changes in the container's /etc/ssh/sshd_config are completely ignored unless I restart the SSH server by hand.
~# pct config 108
arch: amd64
cores: 4
features: nesting=1
hostname: foundry
memory: 2048
nameserver: 2620:fe::fe
net0...