Had these in my /etc/apt/sources.list
deb http://ftp.se.debian.org/debian bookworm main contrib
deb http://ftp.se.debian.org/debian bookworm-updates main contrib
Got
root@pve22-31-cephzfs:~# apt-get install -f net-tools
Reading package lists... Done
Building dependency tree... Done
Reading...
Updated the guide a bit:
Ceph on ZFS
Rationale:
Wanting to get the most out of my Samsung PM983 enterprise NVMes and more speed out of Ceph, I wanted to
test Ceph on top of a non-RAIDZ ZFS pool to make use of the ARC, SLOG and L2ARC.
Prerequisites:
Proxmox (or Debian)
Working ceph installation (MON, MGR)...
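A non-RAIDZ pool with separate SLOG and L2ARC devices, as described in the rationale, could be laid out roughly like this. This is a sketch only: the pool name and device paths are hypothetical, so adjust them to your own hardware before running anything.

```shell
# Single-disk (non-RAIDZ) pool on the data NVMe -- hypothetical device names
zpool create -o ashift=12 cephzfs /dev/nvme0n1

# Dedicated SLOG (sync-write log) on a partition of the enterprise SSD
zpool add cephzfs log /dev/nvme1n1p1

# L2ARC read cache on another partition of the same SSD
zpool add cephzfs cache /dev/nvme1n1p2
```

Keeping SLOG and L2ARC on separate partitions of one PLP SSD is a common compromise on small nodes; on larger setups they would normally live on separate devices.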
I am adding this to my system right now to try out Ceph performance with ZFS as the underlying filesystem and an enterprise SSD providing the special/SLOG and cache devices, plus the ZFS ARC, to see whether it improves Ceph performance. I was wondering if you can see an improvement there.
Might be a long shot, but I have had this message pop up on me before; it turned out to be a broken RAM/CPU channel, so I had to replace the motherboard.
I mention that because the only thing the last download touches is RAM.
Hi Adriano,
I have invested in 10 Gbit networking, 3 hosts (not even your 7), Samsung NVMe PLPs, bigger HDDs, NVMe DB/WAL, cache pools etc., and wound up disappointed with the numbers: 100 MB/s read and 30 MB/s write on cold/warm data, with my expensive 10 Gbit networking barely breaking a...
Hi!
In my constant chase to get the most out of my PLP NVMes and 10 Gbit network, I came to realise that Ceph wasn't cutting it when it came to performance, even though its reliability and having a Proxmox GUI absolutely rock. (That's not even mentioning being able to SEE the files in Gluster even with a broken MDS/MON...
Check ceph osd df and look at the standard deviation; it should be as close to zero as possible. It indicates whether data is evenly distributed across your disks. Also check how many PGs you have.
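The balance check above can be scripted. Below is a minimal sketch that assumes the JSON shape of ceph osd df --format json (a "nodes" list where each OSD reports a "utilization" percentage and a "pgs" count); the sample numbers are made up for illustration.

```python
import json
import statistics

def osd_balance(osd_df_json: str):
    """Return (stddev of utilization %, PGs per OSD) from 'ceph osd df' JSON."""
    data = json.loads(osd_df_json)
    utils = [n["utilization"] for n in data["nodes"]]
    pgs = [n["pgs"] for n in data["nodes"]]
    return statistics.pstdev(utils), pgs

# Abbreviated sample output with hypothetical numbers:
sample = json.dumps({"nodes": [
    {"name": "osd.0", "utilization": 41.2, "pgs": 128},
    {"name": "osd.1", "utilization": 44.8, "pgs": 130},
    {"name": "osd.2", "utilization": 39.5, "pgs": 126},
]})

dev, pgs = osd_balance(sample)
print(f"utilization stddev: {dev:.2f}%  pgs per osd: {pgs}")
# A stddev far from zero suggests rebalancing or more PGs may be needed.
```

In practice you would feed it the real output, e.g. `ceph osd df --format json | python3 this_script.py`, instead of the embedded sample.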
Also check this out:
https://www.youtube.com/watch?v=LlLLJxNcVOY
I had a similar problem with 2 of my nodes failing to download and update packages: https://forum.proxmox.com/threads/lost-both-1gb-and-10gb-network-after-7-0-upgrade.95114/
I'd say this would indicate an issue with that apt update?
The maximum I have ever heard mentioned is 7 monitors; the reason for not going higher is that updates might slow down or, in rare cases, corrupt the small monitor DB.
I would first ensure the network hasn't changed MTUs or the like. Then you should be able to move the monitor DB on your other broken nodes, get quorum, and wait for them to sync with the primary working one.
This is absolutely awesome, thank you sir!
Curious if you are able to get the RPi to mount Ceph filesystems/RBDs? Thinking of setting up LXCs/Docker on an RPi in a Proxmox cluster.