It does not work with Sectigo CA:
Attempting to register account with 'https://acme.sectigo.com/v2/OV'..
Generating ACME account key..
Registering ACME account..
Registration failed: Error: POST to https://acme.sectigo.com/v2/OV/newAccount...
Hi,
I want to use ACME for certificates from Sectigo, but I'm missing the options for external account binding, as the registration needs both an eab-kid and an eab-hmac-key parameter.
With certbot these are the --eab-kid and --eab-hmac-key options, filled with the appropriate values from our Sectigo account.
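For reference, the registration with certbot looks roughly like this; the e-mail address and the two EAB values below are placeholders and have to come from your own Sectigo account:

Code:
certbot register \
  --server https://acme.sectigo.com/v2/OV \
  --email admin@example.com \
  --eab-kid <your-eab-kid> \
  --eab-hmac-key <your-eab-hmac-key>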
This would be interesting for me too.
We have several applications which would need the real link speed inside the VM:
-> Samba -> it uses multiple streams (SMB multichannel) only with link speeds of 10 Gbit/s and up, and the number of streams depends on the link speed.
The same goes for Lustre and iRODS.
As it is...
It is not a myth, just don't put a RAID controller or anything similar below a ZFS pool. ZFS likes to see all disks directly, and some features even require this, for example the protection against silent bitrot.
Just use an adequate SAS or NVMe HBA.
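A minimal sketch of what that looks like in practice, assuming an IT-mode HBA; the pool name and the by-id paths are just examples:

Code:
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK_SERIAL_A \
    /dev/disk/by-id/ata-DISK_SERIAL_B
zpool status tank    # ZFS talks to the raw disks, no RAID layer in between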
If you need a consistent database state in the backup, you have to use a hook script.
Read this thread: https://forum.proxmox.com/threads/mysql-databases-in-vm-backups.72912/
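As a rough illustration (not a drop-in solution), such a vzdump hook script could look something like this; the VMID, the in-guest dump script and the use of the QEMU guest agent are assumptions for the example:

Code:
#!/bin/bash
# vzdump passes phase, mode and vmid as arguments; enable the script via
# "script: /usr/local/bin/vzdump-hook.sh" in /etc/vzdump.conf or --script.
phase="$1"; mode="$2"; vmid="$3"

if [ "$phase" = "backup-start" ] && [ "$vmid" = "101" ]; then
    # Dump the database inside the guest (needs qemu-guest-agent) so the
    # backup contains a consistent copy of the data.
    qm guest exec "$vmid" -- /usr/local/bin/dump-mysql.sh
fi
exit 0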
You _must_ configure the Rapid Spanning Tree Protocol, STP is probably def...
I would not recommend the broadcast setup; in my opinion it is error-prone.
Try the routed setup instead.
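A minimal sketch of such a routed full-mesh setup, shown as runtime ip commands on one node; the interface names and the 10.15.15.x addresses are only examples, and for a permanent setup you would put the equivalent lines into /etc/network/interfaces on every node:

Code:
# node1 = 10.15.15.50, direct link to node2 (10.15.15.51) on ens19,
# direct link to node3 (10.15.15.52) on ens20
ip link set ens19 up
ip link set ens20 up
ip addr add 10.15.15.50/32 dev ens19
ip addr add 10.15.15.50/32 dev ens20      # same node address on both direct links
ip route add 10.15.15.51/32 dev ens19     # node2 is reached over ens19
ip route add 10.15.15.52/32 dev ens20     # node3 is reached over ens20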
OK, if you see the drives in Proxmox (usually as /dev/sd...), then it is fine.
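If you want to double-check, something like this should show every physical disk on its own (the columns are just what I typically look at):

Code:
lsblk -o NAME,MODEL,SERIAL,SIZE   # each physical disk should show up individually
ls -l /dev/disk/by-id/            # stable per-disk paths, useful for ZFS/Ceph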
As a last resort, it is easily possible to reflash an IR-mode controller to IT mode and vice versa (the same hardware flashed with MegaRAID code is another thing; it is often not possible to flash those beasts to IT/IR mode).
One more point:
In my experience, backing the Ceph OSDs with an SSD is a pain in the ass when you need to replace a drive, or even the SSD itself.
Performance is also not as good as you probably want it to be: it is of course much better than with spinning drives alone, but at the lowest end you...
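For reference, creating an OSD with its DB/WAL on a faster device looks roughly like this on Proxmox; the device paths and the DB size are placeholders, so check your actual sizing first:

Code:
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 60
# replacing a failed disk later means destroying and recreating the OSD,
# e.g. pveceph osd destroy <osd-id> --cleanup, which is where the pain starts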
If I read the specs correctly, the CRA3338 is a Broadcom controller flashed in the so-called "IR" mode. I'm not sure if it is really possible to expose the drives in JBOD mode.
For Ceph you need to expose the drives directly, so I would recommend buying a version of the controller which is...
The write bandwidth is limited by network _and_ disk latency: a write can only be acknowledged to the client after all replicas have acknowledged it, so every single write pays the network round trip plus the commit time of the slowest replica.
You say that it is your final project? Are you doing it as a project for an examination as a Fachinformatiker (German IT specialist apprenticeship)?
The design of Ceph requires a pool size of at least 3; a pool size of 2 should never be used in production.
When a pool with size 2 loses an OSD, it has to block traffic because data protection can no longer be guaranteed.
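If an existing pool was created with size 2, it can be changed on the fly; a short sketch with an example pool name:

Code:
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2
ceph osd pool get vm-pool size      # verify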
If you want HA, things always get more expensive, and yes, latency is much better with 10G. But I bet that you can at least gain some speed with the DB/WAL on an SSD. It's a shame that hardware availability is so bad in Brazil; I hope you can find something used.
Also try jumbo frames on your gigabit network (hope your...
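A quick, non-persistent way to test jumbo frames; the interface name and peer address are examples, and the switch ports must allow the larger MTU as well:

Code:
ip link set dev eno1 mtu 9000
ping -M do -s 8972 10.15.15.51   # 9000 bytes minus 28 bytes IP+ICMP header, fragmentation forbidden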
Reading in Ceph is always much faster, as the data can be read from the nearest OSD node; in the best case this is the local node.
Writing is a different thing, as Ceph has to mirror the data to the replicas in the background (over the backend network) and only acknowledges to the client after the last replica...
Spinning HDs and a 1 Gigabit network are just not fast enough for the specific workload of VMs.
Ceph is very latency-dependent.
A setup with spinning disks and a 1 Gigabit/s network is only good for read-intensive bulk storage or an experimental setup.
Don't ever try it with VMs.
That depends heavily on the modes supported by both the operating system and the switch side!
Not all static bond modes work with every switch, and they can lead to very annoying errors. With many switch brands only active/passive (active-backup) bonding works reliably.
LACP is more dynamic and detects failures of...
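For completeness, a minimal LACP bond sketch using iproute2; the interface names are examples, and the switch ports have to be configured as an LACP/802.3ad group as well, otherwise the bond will not negotiate correctly:

Code:
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set eno1 down && ip link set eno1 master bond0
ip link set eno2 down && ip link set eno2 master bond0
ip link set bond0 up
cat /proc/net/bonding/bond0   # check the negotiated LACP partner state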