Recent content by holr

  1. Proxmox & Ceph, auditing disks?

    Hello, I am trying to identify "rogue" or unattached disks in our Ceph/Proxmox cluster. I run a few auditing-type functions on our Ceph storage to find out which VMs are using the most space (as in used, not preallocated), and to remove them. I find that: #rbd -p <pool name> du... (see the rbd sketch after this list)
  2. VM storage traffic on Ceph

    Oh dear, looks like a schoolboy error on my part! I sincerely appreciate you taking the time to look into the issue. It sounds like modifying the networks in situ is not without risk; I think it will be a rebuild at some point. Thanks again.
  3. VM storage traffic on Ceph

    Hi Shanreich, thank you for replying. Here's the Ceph configuration, network configuration, and VM configuration in that order below. Ceph Configuration [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network =...
  4. VM storage traffic on Ceph

    Hello, I think I have misunderstood how some of the different networks function within Proxmox. I have a cluster of 9 nodes. Each node has two network cards: a 40Gbit/s card dedicated to Ceph storage, and a 10Gbit/s card for all other networking (management/corosync, user traffic). I had assumed... (see the ceph.conf sketch after this list)
  5. Error: Unable to find free port (61000-61099)

    Hello, I only went as far as filing the bug and left it there, sorry. I've had to fall back to noVNC until the limit is altered. I only experienced the error with 100 or more SPICE-enabled VMs running simultaneously. Sorry I can't be more help!
  6. Error: Unable to find free port (61000-61099)

    Thank you for the recommendation, I filed a bug request at https://bugzilla.proxmox.com/show_bug.cgi?id=2618
  7. Error: Unable to find free port (61000-61099)

    Hello, I have one Proxmox server in a cluster (Proxmox 6.1-7) that is running many virtual machines. After a recent cloning process to create another batch, I am finding that several cannot be started, instead exhibiting the following error message: Error: Unable to find free port (61000-61099)...
  8. Let's encrypt on a multi-node cluster

    Thank you for the hint, fabian. I restored our archived pve-ssl.key file and, voilà, SPICE is working again. Thank you!
  9. Let's encrypt on a multi-node cluster

    ==== *UPDATE* I restored the backups of pve-ssl.pem and pve-root-ca.pem on PVE1, which had the knock-on effect of syncing the correct pve-root-ca.pem across the cluster. All nodes now report OK after running openssl verify -CAfile /etc/pve/pve-root-ca.pem /etc/pve/local/pve-ssl.pem SPICE...
  10. Let's encrypt on a multi-node cluster

    Hello, I have been following the instructions at https://pve.proxmox.com/wiki/Certificate_Management on a 5-node Proxmox cluster. Let's Encrypt (using ACME) was used on the first node, PVE1, with great success. Accessing the server shows a valid certificate. Accessing VMs on PVE1 via noVNC...
  11. Thinkpad P1 G2 Wifi

    Whenever I've been in a pinch, I'll stick in an older USB Wi-Fi adapter that is supported, until the kernel is updated to the point where it supports the new Wi-Fi adapter. Not the best option, but a lot easier than compiling your own kernel and dealing with unsupported (from a Proxmox...
  12. Proxmox installation failed (Help!)

    Hello, I'm not sure if this will help with your exact issue, but we've found we had to flash the memory stick in "raw" (or dd) mode with tools such as Rufus on Windows, or with dd on Mac/Linux (see the dd sketch after this list). Sometimes the tools flash sticks in "ISO" mode, which has caused us errors - but I'll admit I can't...
  13. Proxmox + ZFS + HP P440ar on HBA mode

    Hi, I have some experience with HP DL360 Gen9 servers with the HP P440ar. We set the drives to HBA mode (so each physical disk is exposed to Proxmox for a Ceph cluster), but we had to set up each drive as an individual RAID 0 (I am not sure if we can run the drives without this RAID 0 approach, but... (see the ssacli sketch after this list)
  14. Ceph Performance within VMs on Crucial MX500

    Thank you for your reply and insight. I'll be migrating to a 3/2 (size/min_size) setup in the near future, once I'm able to offload a number of the active VMs on the cluster, so that is sound advice. Do you think the benchmarks with the enterprise-level Samsung drives (PM1643) above look sensible? Those drives are...
  15. Ceph Performance within VMs on Crucial MX500

    I'm using the default as created in Proxmox 5.4 (BlueStore). The Ceph pool settings are as follows: [global] auth client required = none auth cluster required = none auth service required = none cluster network = 10.x.x.0/24 debug_asok = 0/0 debug_auth = 0/0...
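
For the disk-auditing question in item 1, a minimal shell sketch of that kind of check. It assumes VM and container configuration files live under /etc/pve/nodes/ (the standard Proxmox cluster filesystem path); <pool name> is the placeholder from the post, so substitute your pool and review the output by hand before deleting anything.

    # Per-image usage (used vs. provisioned) in a pool, as in the post:
    rbd -p <pool name> du

    # Hedged sketch: flag images that no VM or container config references.
    for img in $(rbd ls <pool name>); do
        grep -rq "$img" /etc/pve/nodes/*/qemu-server/ /etc/pve/nodes/*/lxc/ \
            || echo "not referenced by any config: $img"
    done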
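
For the network question in item 4: client (VM disk) I/O in Ceph travels over the public_network, while cluster_network carries only OSD-to-OSD replication and heartbeat traffic. A hedged ceph.conf fragment with example subnets (not the thread's real addresses) showing the split:

    [global]
        # client/VM disk traffic - put this on the fast storage NIC
        public_network  = 192.0.2.0/24
        # OSD replication and heartbeat traffic only
        cluster_network = 198.51.100.0/24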
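
For the USB-flashing tip in item 12, a hedged example of writing the installer image in raw/dd mode on Linux (GNU dd). The ISO name and /dev/sdX are placeholders, and the command overwrites the target device, so double-check the device name first.

    # Write the ISO byte-for-byte ("raw"/dd mode) to the stick, then flush caches:
    dd if=proxmox-ve_<version>.iso of=/dev/sdX bs=1M status=progress
    sync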
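
For the P440ar workaround in item 13, a hedged sketch with HPE's ssacli tool of wrapping each physical disk in its own single-drive RAID 0 logical drive. The controller slot and drive ID below are examples only; check your own layout first.

    # Show the controller, its slot number, and the attached physical drives:
    ssacli ctrl all show config

    # Example: expose one physical drive (ID 1I:1:1 here) as a single-drive
    # RAID 0 logical drive so Proxmox/Ceph sees it as a standalone disk:
    ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0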
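
For the benchmark discussion in items 14 and 15, a hedged example of a pool-level baseline with rados bench, separate from any in-VM tests; <pool name> is a placeholder, and the write phase leaves test objects behind until the cleanup step.

    # 60-second write benchmark; --no-cleanup keeps the objects for the read test:
    rados bench -p <pool name> 60 write --no-cleanup
    # Sequential read benchmark against the objects written above:
    rados bench -p <pool name> 60 seq
    # Remove the benchmark objects afterwards:
    rados -p <pool name> cleanup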
