Yes, I am/was keen to get some of them too.
That's really a bummer :-(
I wanted to put two to four OSDs in each of them; the actual constraints should allow for four. Now look at https://docs.ceph.com/en/mimic/start/hardware-recommendations/#ram ...
No matter whether the developers decided on bits or on bytes, they should put the unit they chose on the graph: "b/s" or "B/s" (or "p", as in "per", instead of "/").
Leaving the physical unit off a graph automatically brought down the test grade at my...
Hi friend, I can tell you that I have been studying this topic for months, comparing ZFS and Ceph. The problem in my case is that I have many VMs doing different jobs, about 50 VMs on 3 nodes. I started with HDD disks in a ZFS raidz, but I had many...
ZFS is known to have poor performance with HDDs. While it may work, it is often suggested to add two smaller SSDs as special devices to your ZFS HDD pool to help with performance; even better, use SSDs only. ZFS, being a COW filesystem, needs more...
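As a rough sketch of the special-device suggestion above (the pool name `tank` and the device paths are placeholders, not from this thread):

```shell
# Add a mirrored pair of SSDs as a "special" vdev to an existing HDD pool.
# Metadata (and optionally small blocks) will then land on the SSDs.
zpool add tank special mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# Optionally let small file blocks go to the SSDs as well (per pool/dataset):
zfs set special_small_blocks=64K tank
```

Note that a special vdev is pool-critical: if it fails, the pool is lost, hence the mirror.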
Well..., ZFS ARC does shrink if required. But it does so slowly. (Edit: ...and if it is allowed to by "zfs_arc_min" being lower than _max)
Too slow to be fast enough if one VM (or any other process) requests too much memory at once.
RAM is the...
You are running out of RAM
Out of memory: Killed process 1925 (kvm)
Maybe you should reduce your VMs' memory and check/limit the ZFS ARC cache
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage
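Following the linked admin guide, limiting the ARC looks roughly like this (8 GiB is an example value, pick what fits your RAM budget; and per the note above, make sure `zfs_arc_min` stays below the new max):

```shell
# Limit the ZFS ARC to 8 GiB (value in bytes); takes effect immediately:
echo "$((8 * 1024 * 1024 * 1024))" > /sys/module/zfs/parameters/zfs_arc_max

# Make the limit persistent across reboots:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```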
No. That script looks fine - and I had already "Like"d it. :-)
I am just a bit reluctant. While I really like Debian and I know enough to do a lot of optimization/scripting I usually hesitate to add another...
https://github.com/kneutron/ansitest/tree/master/proxmox
Look into the bkpcrit script, point it to separate disk / NAS, run it nightly in cron.
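A minimal cron entry for the nightly run mentioned above (the script path and log location are placeholders; adjust to wherever you checked out the repo and mounted the backup target):

```shell
# /etc/cron.d/bkpcrit -- run the backup script nightly at 02:30 as root.
30 2 * * * root /root/ansitest/proxmox/bkpcrit > /var/log/bkpcrit.log 2>&1
```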
You can try Relax-And-Recover (REAR) or I also have custom fsarchiver scripts in that repo to do...
google high availability storage. There are many options available.
Resources aren't relevant in and of themselves. Storage solutions can range from very low to very high powered, with different classes of disk, tiering and caching methods, and with varying...
Presumably the network card has a different name than in the old PC.
Connect a display, log in, check the output of "ip addr" and adjust "/etc/network/interfaces" accordingly
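A sketch of those steps (interface names and addresses are examples; your actual NIC name will differ):

```shell
# Find the current NIC name (e.g. enp3s0 instead of the old eno1):
ip addr

# Then update /etc/network/interfaces to match, e.g.:
#   auto vmbr0
#   iface vmbr0 inet static
#           address 192.168.1.10/24
#           gateway 192.168.1.1
#           bridge-ports enp3s0    <-- new NIC name goes here
#           bridge-stp off
#           bridge-fd 0

# Apply the change (ifupdown2 is the PVE default), or simply reboot:
ifreload -a
```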
Thanks for your reply. I think there is a small misunderstanding here.
My setup is as follows:
PBS server is running locally with HDD storage (so IOPS is not the bottleneck).
The problem occurs only because the remote S3-compatible backend...
It can run on other cluster members. The Ceph storage layer still exists on 3 of 4 nodes. If the server is still operating, the VM may be able to migrate; otherwise HA will start it on another server.
you may want to read through...
The PBS is local and just the storage is remote in that slow cloud?
I would suspect that having a working and usable PBS with just a handful of IOPS is... impossible.
Remember that local HDDs with ~150 IOPS are considered too slow (for...
Just to be sure, did you see this? --> https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
From my (possibly wrong) understanding, Gluster is deprecated; at least Red Hat states...
!!! Hint 1: This thread is not for Ceph fanboys. Go away! !!!
Occasion:
In the thread zfs-over-iscsi HA Storage solution I mentioned that...
Well, I'm not that young anymore either xD, but I've been working with Linux, Windows and in IT for about 35 years, so I have quite a bit of experience; I've also taught myself a lot through trial and error and a lot of reading...
Excuse my ignorance, but does this not create a single point of failure for the whole cluster?
To automatically force VM migration to an unaffected node.
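In Proxmox VE this means putting the VM under HA management; a minimal sketch (the VM ID 100 is an assumption):

```shell
# Register the VM as an HA resource so it is restarted/relocated
# automatically when its node fails:
ha-manager add vm:100 --state started

# Check the HA state of all managed resources:
ha-manager status
```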