Hi,
I want to use ACME for certificates from Sectigo, but I'm missing the options for External Account Binding, as the registration needs both an eab-kid and an eab-hmac-key parameter.
With certbot these are --eab-kid and --eab-hmac-key, with the appropriate values from our Sectigo account.
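For illustration, a certbot registration with External Account Binding could look roughly like this (the directory URL, domain, and placeholder values are assumptions; substitute the ones from your Sectigo account):

```shell
# sketch only: replace the placeholders with your real EAB credentials
certbot certonly --standalone \
  --server https://acme.sectigo.com/v2/OV \
  --eab-kid "your-eab-kid" \
  --eab-hmac-key "your-eab-hmac-key" \
  -d example.com
```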
This would be interesting for me too.
We have several applications that would need the full link speed inside the VM:
-> Samba: it only uses multiple streams at link speeds of 10 Gbit/s and up, and the number of streams depends on the link speed.
Same for Lustre and iRODS.
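As an aside, the multi-stream feature mentioned above is SMB multichannel, which on the Samba side is switched on with a single smb.conf option (a minimal sketch; check your Samba version's support):

```
[global]
    # enable SMB3 multichannel so clients can open multiple streams
    server multi channel support = yes
```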
As it is...
It is not a myth; just don't put a RAID controller or anything similar below a ZFS pool. ZFS likes to see all disks directly, and some features even require this, for example the protection against silent bitrot.
Just use adequate SAS or NVMe HBAs.
If you need a consistent database state in the backup, you have to use a hook script.
Read this thread: https://forum.proxmox.com/threads/mysql-databases-in-vm-backups.72912/
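A minimal sketch of such a vzdump hook script (assuming MySQL; the dump path is just an example, adapt it to your setup):

```shell
#!/bin/bash
# vzdump calls the hook script with the phase as the first argument
if [ "$1" = "backup-start" ]; then
    # take a consistent dump so it ends up inside the backup
    mysqldump --single-transaction --all-databases > /var/backups/mysql-pre-backup.sql
fi
```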
You _must_ configure the Rapid Spanning Tree Protocol; plain STP is probably the default.
I would not recommend the broadcast setup; in my opinion it is error-prone.
Try the routed setup instead.
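In a 3-node full mesh, the routed setup boils down to one direct link per peer plus a static route over it. A sketch for one node (interface names and addresses are made up, not from any specific guide):

```
# /etc/network/interfaces fragment for node 1 (10.15.15.50)
auto eno1
iface eno1 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev eno1   # direct link to node 2

auto eno2
iface eno2 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev eno2   # direct link to node 3
```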
OK, if you see the drives in Proxmox (usually as /dev/sd...), then it is fine.
As a last resort it is easily possible to reflash an IR-mode controller to IT mode and vice versa (the same hardware flashed with MegaRAID code is another thing; it is often not possible to flash these beasts to IT/IR mode).
One more point:
In my experience, backing the Ceph OSDs with an SSD is a pain in the ass in case you need to replace a drive, or even the SSD.
Also, performance is not as good as you probably want it to be. It is of course much better than with spinning drives alone, but at the lowest end you...
If I see it correctly from the specs, the CRA3338 is a Broadcom controller flashed in so-called "IR" mode. I'm not sure if it is really possible to expose the drives in JBOD mode.
For Ceph you need to expose the drives directly, so I would recommend buying a version of the controller which is...
The write bandwidth is limited by network _and_ disk latency: a write can only be acknowledged after all replicas have acknowledged it.
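A toy model of that last point (a simplification for illustration, not Ceph code): the client-perceived write latency is dominated by the slowest replica, so one slow disk drags the whole pool down.

```python
def write_latency(network_rtt_ms, disk_latency_ms_per_replica):
    """A replicated write completes only after the slowest replica has
    acknowledged, so latency ~ network RTT + slowest disk latency."""
    return network_rtt_ms + max(disk_latency_ms_per_replica)

# two fast SSDs and one slow HDD: the HDD dominates
print(write_latency(0.5, [0.8, 1.2, 6.0]))  # -> 6.5
```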
You say that it is your final project? Are you doing it as a project for an examination as a Fachinformatiker?
The design of Ceph requires a pool size of at least 3; a pool size of 2 should never be used in production.
When a pool with size 2 loses an OSD, it has to block traffic, as data protection can no longer be guaranteed.
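In practice that means something like the following (the pool name "mypool" is just a placeholder):

```shell
ceph osd pool set mypool size 3       # keep 3 replicas of every object
ceph osd pool set mypool min_size 2   # still serve I/O with one replica missing
```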