Search results

  1. K

    Certificates: acme and Sectigo

    It does not work with the Sectigo CA: Attempting to register account with 'https://acme.sectigo.com/v2/OV'.. Generating ACME account key.. Registering ACME account.. Registration failed: Error: POST to https://acme.sectigo.com/v2/OV/newAccount...
  2. K

    Certificates: acme and Sectigo

    Hi, I want to use ACME for certificates from Sectigo, but I'm missing the options for External Account Binding, as the registration needs both an eab-kid and an eab-hmac-key parameter. With certbot these are --eab-kid and --eab-hmac-key, with the appropriate values from our Sectigo account.
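The certbot flags mentioned in the post are real; a minimal sketch of such a registration could look like this (the email address, domain, and the EAB credential values are placeholders — the real kid/HMAC values come from your Sectigo account):

```shell
# Sketch: request a certificate from Sectigo's ACME OV endpoint using
# External Account Binding. All credential values below are placeholders.
certbot certonly \
  --server https://acme.sectigo.com/v2/OV \
  --eab-kid "YOUR_EAB_KID" \
  --eab-hmac-key "YOUR_EAB_HMAC_KEY" \
  --email admin@example.com \
  -d www.example.com
```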
  3. K

    What's Next?

    A feature I would like in the installer: support for setting up bonds and VLANs in the installer.
  4. K

    virtio network driver info

    This would be interesting for me too. We have several applications which need the link speed inside the VM: -> Samba -> it would use multiple streams only with link speeds of 10 Gbit/s and up, and the number of streams depends on the link speed. The same goes for Lustre and iRODS. As it is...
  5. K

    PVE Ceph - Upgrade current OSDs with larger ones, 8 OSD (Filestore) shares a single journal NVMe

    In principle this is the right way to do it, but why not Bluestore? It works fine.
  6. K

    ZFS on top enterprise hardware RAID is bad: any evidence or first-hand experience?

    It is not a myth; just don't use a RAID controller or anything similar below a ZFS pool. ZFS likes to see all disks directly. Some features even require this, for example the protection against silent bitrot. Just use adequate SAS or NVMe HBAs.
  7. K

    Mysql corruption

    If you need a consistent database state in a backup, you have to use a hook script. Read this thread: https://forum.proxmox.com/threads/mysql-databases-in-vm-backups.72912/
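A minimal sketch of such a hook script, assuming SSH access to a hypothetical guest named db-vm (the linked thread may use a different mechanism): vzdump calls the script with the backup phase as its first argument, so a consistent SQL dump can be taken inside the guest just before the disk snapshot.

```shell
#!/bin/bash
# Hypothetical vzdump hook script sketch. vzdump invokes this with the
# current phase as $1; on backup-start we take a consistent dump inside
# the guest, so the snapshot always contains a restorable database state.
# Guest name, SSH access, and the dump path are assumptions.
case "$1" in
  backup-start)
    ssh root@db-vm \
      "mysqldump --single-transaction --all-databases > /var/backups/pre-vzdump.sql"
    ;;
esac
exit 0
```

The script would then be referenced via vzdump's `--script` option (or the `script:` setting in /etc/vzdump.conf). Using `mysqldump --single-transaction` avoids holding a `FLUSH TABLES WITH READ LOCK` across sessions, which is session-scoped and would be released as soon as the SSH connection closes.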
  8. K

    Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14 Ceph 16.2.6)

    You _must_ configure Rapid Spanning Tree Protocol; plain STP is probably the default. I would not recommend the broadcast setup — in my opinion it is error-prone. Try the routed setup instead.
  9. K

    Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14 Ceph 16.2.6)

    How did you configure the switchless networks: STP or RSTP? It sounds like rebuilding the spanning tree root takes too long.
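For context: the plain Linux bridge only implements classic STP, so enabling RSTP typically means an Open vSwitch bridge. A sketch under that assumption (bridge name vmbr1 is hypothetical):

```shell
# Sketch: enable RSTP on an Open vSwitch bridge. The classic Linux bridge
# only speaks plain STP, whose reconvergence can take 30+ seconds; RSTP
# recovers much faster. Bridge name vmbr1 is an assumption.
ovs-vsctl set Bridge vmbr1 rstp_enable=true
ovs-vsctl set Bridge vmbr1 other_config:rstp-priority=32768

# Verify that RSTP is active:
ovs-vsctl get Bridge vmbr1 rstp_enable
```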
  10. K

    Proxmox VE on Gigabyte R281-3C1 with CRA3338 array controller

    OK, if you see the drives in Proxmox (usually as /dev/sd...), then it is fine. As a last resort it is easily possible to reflash an IR-mode controller to IT mode and vice versa (the same hardware flashed with MegaRAID code is another matter; it is often not possible to flash these beasts to IT/IR mode).
  11. K

    Proxmox VE on Gigabyte R281-3C1 with CRA3338 array controller

    One more point: in my experience, backing Ceph OSDs with an SSD is a real pain in case you need to replace a drive or even the SSD. Performance is also not as good as you probably want. It is of course much better than with spinning drives alone, but at the lowest end you...
  12. K

    Proxmox VE on Gigabyte R281-3C1 with CRA3338 array controller

    If I see it correctly from the specs, the CRA3338 is a Broadcom controller flashed in the so-called "IR" mode. I'm not sure whether it is really possible to expose the drives in JBOD mode. For Ceph you need to expose the drives directly, so I would recommend buying a version of the controller which is...
  13. K

    Ceph BlockStorage Speed seems to limited

    The write bandwidth is limited by network _and_ disk latency. A write can only be acknowledged after all replicas have acknowledged it. You say it is your final project? Are you doing it as an examination project as a Fachinformatiker (German IT specialist apprenticeship)?
  14. K

    [SOLVED] Proxmox / CEPH maintenance causes VM's to be unresponsive

    The design of Ceph requires a pool size of at least 3. A pool size of 2 should never be used in production. When a pool with size 2 loses an OSD, it has to block traffic, as data protection can no longer be guaranteed.
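The recommended settings can be applied with the standard Ceph CLI; a sketch, assuming a hypothetical pool name vm-pool:

```shell
# Sketch: set 3 replicas, and allow I/O to continue as long as at least
# 2 replicas are available. With size=2/min_size=2, losing a single OSD
# blocks all I/O on the affected placement groups. Pool name is an assumption.
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

# Verify:
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size
```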
  15. K

    Ceph performance with simple hardware. Slow writing.

    If you want HA, things always get more expensive, and yes, latency is much better with 10G. But I bet you can at least gain some speed with the DB/WAL on an SSD. It's unfortunate that hardware availability is so bad in Brazil; I hope you can find something used. Also try jumbo frames on your gigabit network (hope your...
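Enabling jumbo frames on a Proxmox host is an /etc/network/interfaces change; a sketch with hypothetical interface name and addresses (the switch must support jumbo frames end to end):

```shell
# /etc/network/interfaces fragment (interface name and addresses are
# placeholders): raise the MTU to 9000 on the Ceph network interface.
auto eno1
iface eno1 inet static
    address 10.10.10.11/24
    mtu 9000
```

A quick end-to-end check is a non-fragmenting ping with a 9000-byte frame (8972 payload bytes + 28 bytes IP/ICMP headers): `ping -M do -s 8972 10.10.10.12` — if it fails, some hop is not passing jumbo frames.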
  16. K

    Ceph performance with simple hardware. Slow writing.

    Reading is always much faster in Ceph, as data can be read from the nearest OSD node — in the best case the local node. Writing is a different thing, as Ceph has to mirror it to the replicas in the background (over the backend network) and only acknowledges to the client after the last replica...
  17. K

    Ceph performance with simple hardware. Slow writing.

    Spinning HDDs and a 1 Gigabit network are just not fast enough for the specific workload of VMs; Ceph is very latency-dependent. A setup with spinning disks and a 1 Gbit/s network is only good for read-intensive bulk storage or an experimental setup. Don't ever try it with VMs.
  18. K

    question about bond count on Ceph nodes

    That depends heavily on the modes supported by both the operating system and the switch side! Not all static bond modes work with every switch, and mismatches can lead to very annoying errors. With many switch brands only active/passive bonding works reliably. LACP is more dynamic and detects failures of...
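For reference, an LACP (802.3ad) bond on a Proxmox host is configured in /etc/network/interfaces; a sketch with hypothetical interface names (the switch ports must be configured as an LACP channel group as well):

```shell
# /etc/network/interfaces fragment (interface names are placeholders):
# an LACP bond of two NICs. 802.3ad negotiates with the switch and
# detects link failures, unlike static bond modes.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```

The layer3+4 hash policy spreads traffic across links per TCP/UDP flow; for the simpler fallback the post mentions, `bond-mode active-backup` works with virtually any switch because only one link carries traffic at a time.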
