Hello Fabian,
I'm sorry, indeed it was on shared storage; I moved it to local storage and forgot to remove the HA resource.
Here is the output of pveversion -v:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3...
Hi everyone,
I had a node hosting a VM on local storage; the node had a failure and was shut down for some time.
Because of HA, the VM was migrated to another node and ended up in a failed state, since its local storage was not migrated with it.
Now the node is up again.
I've set the HA state to disabled for...
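For context, a rough sketch of the ha-manager commands involved (vm:100 is just a placeholder ID, not my actual VM):

# Disable the HA resource so the manager stops acting on the VM
ha-manager set vm:100 --state disabled
# Or remove the VM from HA management entirely
ha-manager remove vm:100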
Hi, me again.
Is the patch available in a specific PVE version?
I'm on 6.3-3 and the issue is still present.
EDIT: Looking at the code, it seems it only covers mon and not osd?
root@proxmox2:~# ls -la /etc/ceph/ceph.conf
lrwxrwxrwx 1 root root 18 Nov 19 15:13 /etc/ceph/ceph.conf -> /etc/pve/ceph.conf
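As a side note, should that symlink ever go missing on a node, recreating it is just this (a sketch; run on the node that lost it):

ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf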
Thanks for mentioning the network; it was on the same network before. The following needs to be allowed in:
TCP 3300 for monitor (msgr v2)
TCP 6789 for monitor (msgr v1)
TCP 6800:7100 for OSD
This is working...
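For anyone finding this later, a rough iptables sketch of allowing those ports (192.168.1.0/24 is a placeholder for the Ceph network; your firewall tooling may differ):

# Ceph monitors (msgr v2 and legacy v1)
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3300 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 6789 -j ACCEPT
# Ceph OSDs
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 6800:7100 -j ACCEPT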
Thanks Alwin, however this did not work.
I also wanted to remove all ceph* packages to start over, but then it wanted to remove proxmox-ve and everything :(
Hi Team!
I reconfigured a server from scratch.
Then I installed the Ceph packages but cancelled the configuration step after the install, so it would use the configuration of the already-running cluster.
Then I made it join the cluster.
Now I cannot configure it through the GUI, and I get the 'got timeout (500)'...
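If it helps, these are the kinds of generic CLI checks I can run and post output from (nothing custom, just to see where the timeout comes from):

ceph -s                    # can the CLI reach the monitors at all?
pveceph status             # same check through the PVE tooling
ls -l /etc/ceph/ceph.conf  # should be a symlink to /etc/pve/ceph.conf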
The network is not saturated: the first spike is with the hdd pool (around 100M), the second with the ssd pool (around 50M); the max seen on the interface is 400Mb during VM migrations.
ceph osd dump
osd.0 up in weight 1 up_from 31272 up_thru 32880 down_at 31271 last_clean_interval [30448,31269)...
I tried multiple times and on different days; I always get the same results.
Yes, the crush rules are different; the difference is the hdd/ssd device class:
rule replicated_hdd {
    id 1
    type replicated
    min_size 2
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0...
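To double-check which rule each pool actually uses, something along these lines (hddpool is the pool name from my setup; adapt as needed):

ceph osd crush rule ls                    # list all crush rules
ceph osd crush rule dump replicated_hdd   # full definition of one rule
ceph osd pool get hddpool crush_rule      # rule assigned to a given pool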
It's not clear to me why the HDDs perform much faster than the SSDs; the hddpool is almost unused.
I would expect roughly the same speed with both types of disks.
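For what it's worth, a direct pool-level comparison could be done roughly like this (ssdpool is a placeholder name for the SSD-backed pool; hddpool is the one I mentioned):

# 60-second write benchmark against each pool
rados bench -p hddpool 60 write --no-cleanup
rados bench -p ssdpool 60 write --no-cleanup
# sequential read of the objects written above, then clean up
rados bench -p hddpool 60 seq
rados -p hddpool cleanup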
proxmox1 is a NUC8i3, proxmox2 and proxmox3 are N54Ls, and proxmox4 and proxmox5 are Dell 8200 SFF i7s.
The SSDs are all 1 TB.
These PCs are all connected to the same Gigabit switch and are up to date.
What other info would you like?
Hello !
I have this Proxmox and Ceph setup:
root@proxmox1:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 18.19080 root default
-7 0.90970 host proxmox1
5 ssd 0.90970 osd.5...