I am not a storage guru, sorry. But yeah, NFS is simple, old+stable and shared. A "simple" setup will create a SPOF though!
For PVE I am utilizing ZFS with replication. It gives me the performance of local drives, does not introduce networking...
I did read that a few times - you sent it to me before haha. That's a nice write up and easy read.
I'm just trying to get shared storage. An iSCSI or NFS share would be amazing.
Could you serve a share using those and mount it in Datacenter...
This has been discussed dozens of times already. That setup with those backup sizes won't ever perform well. You are using the two things that kill PBS GC performance: network shares and HDD only datastore. Your datastore size requires proper...
Yes, I am/was keen to get some of them too.
That's really a bummer :-(
I wanted to put two to four OSDs in each of them; the actual constraints should allow for four. Now look at https://docs.ceph.com/en/mimic/start/hardware-recommendations/#ram ...
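For reference, the rule of thumb from those docs works out roughly like this (a back-of-the-envelope sketch; the 4 GiB per-OSD figure is the BlueStore `osd_memory_target` default, and the 4 GiB OS overhead is my own assumption):

```shell
# Rough RAM budget per node for BlueStore OSDs.
# osd_memory_target defaults to 4 GiB per OSD; 4 GiB OS overhead is assumed.
osds=4
ram_gib=$(( osds * 4 + 4 ))
echo "${ram_gib} GiB minimum for ${osds} OSDs"   # -> 20 GiB minimum for 4 OSDs
```

So four OSDs per node already want ~20 GiB of RAM before the VMs get anything.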
No matter whether the developers decided on bits or on bytes, they should put the unit they chose on the graph: "b/s" or "B/s" (or "p", as in "per", instead of "/").
Not typing the physical unit on a graph automatically brought down the test grade at my...
Hi friend, I can tell you that I have been studying this topic for months, comparing ZFS and Ceph. The problem is that in my case I have many VMs doing different jobs, about 50 VMs on 3 nodes. I started with HDD disks in a ZFS RAIDZ but I had many...
ZFS is known to have bad performance with HDDs. While it may work, it is often suggested to add two smaller SSDs as a special device to your ZFS HDD pool to help with performance; even better, use SSDs only. ZFS, being a COW filesystem, needs more...
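As a sketch of the special-device suggestion (the pool name `tank` and the device paths are placeholders for your setup; the special vdev should be mirrored, since it cannot be removed from a RAIDZ pool and losing it loses the pool):

```shell
# Add a mirrored special vdev (metadata, optionally small blocks) to an HDD pool.
# "tank" and the /dev/disk/by-id/... paths are placeholders.
zpool add tank special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2

# Optionally let small data blocks land on the SSDs as well (per dataset):
zfs set special_small_blocks=64K tank
```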
Well..., ZFS ARC does shrink if required. But it does so slowly. (Edit: ...and if it is allowed to by "zfs_arc_min" being lower than _max)
Too slow to be fast enough if one VM (or any other process) requests too much memory at once.
RAM is the...
You are running out of RAM
Out of memory: Killed process 1925 (kvm)
Maybe you should reduce your VM's memory and check / limit the ZFS ARC cache
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage
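A minimal sketch of what that section of the admin guide describes (the 8 GiB cap is just an example value; size it for your host):

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes) - example value only.
echo "$((8 * 1024 * 1024 * 1024))" > /sys/module/zfs/parameters/zfs_arc_max

# Make the limit persistent across reboots:
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=8589934592
EOF
update-initramfs -u   # needed if the root filesystem is on ZFS
```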
No. That script looks fine - and I had already "Like"d it. :-)
I am just a bit reluctant. While I really like Debian and I know enough to do a lot of optimization/scripting I usually hesitate to add another...
https://github.com/kneutron/ansitest/tree/master/proxmox
Look into the bkpcrit script, point it to separate disk / NAS, run it nightly in cron.
You can try Relax-And-Recover (REAR) or I also have custom fsarchiver scripts in that repo to do...
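A hedged example of wiring such a script into cron (the script path, filename, and NAS mount point below are placeholders; check the repo for the actual invocation):

```shell
# /etc/cron.d/bkpcrit - run the critical-files backup nightly at 02:30.
# Script path and NAS target are placeholders; adjust to your clone/mount.
30 2 * * * root /root/ansitest/proxmox/bkpcrit.sh /mnt/nas/pve-backups >> /var/log/bkpcrit.log 2>&1
```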
Google "high availability storage". There are many options available.
Resources aren't relevant in and of themselves. Storage solutions can be very low to very high powered, with different classes of disk, tiering and caching methods, and with varying...
Presumably the network card has a different name than in the old PC.
Connect a display, log in, check the output of "ip addr" and adjust "/etc/network/interfaces" accordingly
Thanks for your reply. I think there is a small misunderstanding here.
My setup is as follows:
PBS server is running locally with HDD storage (so IOPS is not the bottleneck).
The problem occurs only because the remote S3-compatible backend...
It can run on other cluster members; the Ceph storage layer still exists on the remaining 3 of 4 nodes. If the server is still operating you may be able to migrate the VM; otherwise HA will start it on another node.
you may want to read through...
The PBS is local and just the storage is remote in that slow cloud?
I would suspect that having a working and usable PBS with just a handful of IOPS is... impossible.
Remember that local HDDs with ~150 IOPS are considered too slow (for...