Note that benchmarks rarely translate to other setups; there are too many variables involved.
For a stretched cluster: make sure you have sufficient bandwidth and low latency between the two sites, and make sure you have a CRUSH map that accounts for...
You are looking at this the wrong way.
Whether you consider yourself "corporate" or not, as the custodian of your data systems, your responsibility is to build solutions that work. If this particular solution is outside of your financial...
In fact, I gave you a lot of knowledge and experience regarding Ceph, describing much more than a single-node failure: the general rules, so you can estimate N failures and dimension the cluster appropriately.
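As a back-of-the-envelope sketch of that dimensioning (the numbers below are made up for illustration, not taken from this thread): with replicated pools, usable capacity is roughly raw capacity divided by the replica count, and a pool of size N+1 can tolerate N simultaneous failures before dropping below one surviving copy.

```shell
# Hedged sketch: rough Ceph sizing arithmetic (all numbers hypothetical).
RAW_TB=48        # total raw capacity across all OSDs
SIZE=3           # replica count (pool "size")

USABLE_TB=$((RAW_TB / SIZE))             # each object is stored SIZE times
TOLERATED_FAILURES=$((SIZE - 1))         # failures before the last copy is gone

echo "usable: ${USABLE_TB} TB, tolerates ${TOLERATED_FAILURES} failures"
```

Real planning also needs headroom for self-healing: after a node failure, Ceph re-replicates onto the remaining nodes, so you want more OSD hosts than replicas and enough free space to absorb a lost node.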
Never said such a thing, and dunno...
The results are from the following fio command, executed directly on the NVMe drives; Ceph is not involved in this test...
fio --ioengine=libaio --filename=/dev/nvme... --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60...
This is at least part of it, since he started (see the first post) with consumer drives without power-loss protection.
I doubt, however, that you will get an answer in a four-year-old thread.
Please create a new thread and describe your...
Complete Guide: Setting Up ARM64 Virtual Machines on Proxmox for Raspberry Pi 5 Development
Overview
This guide documents how to create ARM64 virtual machines on Proxmox for Raspberry Pi development and testing. Based on real-world experience...
You set up the vserver as a remote PBS and create a pull job there that pulls the data from the PBS in your LAN:
https://pbs.proxmox.com/docs/managing-remotes.html
You then set the permissions on the PBS instances so that a...
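On the CLI, such a pull sync can be sketched like this (the remote name, host, credentials, and datastore names below are placeholders, not from this thread; see the linked docs for the exact syntax):

```shell
# On the remote (vserver) PBS: register the LAN PBS as a remote,
# then pull its datastore contents. All names here are hypothetical.
proxmox-backup-manager remote create lan-pbs \
    --host 192.0.2.10 --auth-id 'sync@pbs' --password 'secret' \
    --fingerprint '<cert fingerprint>'
proxmox-backup-manager pull lan-pbs lan-datastore local-datastore
```

In practice you would configure this as a scheduled sync job rather than running the pull by hand.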
Make sure you are mirroring both the Debian and Proxmox repos, including the Ceph one(s) if you plan to use it. Once the mirror is done, you have to use the mirror [1] from PVE.
I find it easier to install Nginx on the POM host and add the appropriate apt...
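Assuming the mirror is then served over HTTP, the clients' apt sources would point at it roughly like this (hostname, paths, and file name below are hypothetical; adjust to your mirror layout and release):

```shell
# /etc/apt/sources.list.d/pve-mirror.list (hypothetical hostname/paths)
deb http://pom.example.lan/mirrors/debian-bookworm bookworm main contrib
deb http://pom.example.lan/mirrors/pve-bookworm bookworm pve-no-subscription
```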
Hey,
no, not if they would end up on the same disk. And even if you had more than one disk, I'd really recommend letting the installer set up booting from ZFS. So the best way to achieve that is to create backups of your VMs/CTs, then...
Hi @707, welcome to the forum.
There is more to the error; posting those remaining words is critical.
If I were to guess - you did not disable the Enterprise repo and did not enable the no-subscription repo. Another common pitfall - bad DNS.
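For reference, that repo fix looks roughly like this on PVE 8 / Debian Bookworm (standard paths; adjust the release name for your version):

```shell
# Disable the enterprise repo (it requires a subscription):
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Enable the no-subscription repo instead:
echo 'deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription' \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
```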
Blockbridge ...
Cluster size is immaterial. The only time you'd ever want to allow such a pool is if the data it houses is transient or of no value, because you will likely experience service/data loss.
That's brave!
I ran Ceph in my homelab (!) for a year or so, and I went for 4/2. Yes, that might be paranoia level three, but 3/2 is the absolute minimum for sleeping well.
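For reference, the replica settings discussed here map to two pool parameters; `size` is the total number of replicas and `min_size` is how many must be available for I/O to proceed (the pool name `rbd` below is just an example):

```shell
# 3/2: three replicas, I/O continues as long as two are available
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2

# verify:
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```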
Some other things I learned...
It does not. At least not under every circumstance. There is zero guarantee that the second copy reaches any OSD: if the primary OSD fails/breaks/becomes unreachable/whatever, and/or the secondary OSD is unable to write the data, and before...
You cannot simply "switch".
To use ZFS you would need new disks, or you would have to erase the old ones. The data loss will be 100%.
Some hints toward a safe strategy, though certainly not a complete list:
look up / search for ZFS beginner friendly...
ProxmoxOfflineMirror should cover this; I haven't used it myself (yet), though: https://pom.proxmox.com/
Something like aptly or apt-mirror might work too, but I haven't used them together with Proxmox products or Ceph yet.
power supply
mechanical problems with the cable contacts - unplug/reconnect them on both ends
firmware of the motherboard - "why only now" is a weak argument
all drives: run SMART long selftest and check the results tomorrow
RAM is one of...
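The SMART step above can be run with smartmontools like this (device names are examples; substitute your own):

```shell
# Start a long self-test on each drive:
smartctl -t long /dev/sda
smartctl -t long /dev/nvme0

# The next day, review the overall attributes and the self-test log:
smartctl -a /dev/sda
smartctl -l selftest /dev/sda
```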
Not right now, yes, but 24h+5m after their data is no longer referenced by any other backup. This is a good thing, since it saves a lot of storage space if you have many backups with overlapping data ( due to same operating system, same running...
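That cleanup corresponds to PBS garbage collection, which removes chunks no longer referenced by any backup index, respecting the grace period mentioned above. As a sketch (the datastore name is a placeholder):

```shell
# Trigger garbage collection on a datastore and check its progress:
proxmox-backup-manager garbage-collection start mystore
proxmox-backup-manager garbage-collection status mystore
```

Normally this runs on a schedule rather than by hand, together with a prune job that decides which backup snapshots to keep.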