Hello,
I am trying to identify "rogue" or unattached disks in our Ceph/Proxmox cluster.
I run a few auditing-type checks on our Ceph storage to find out which VMs are using the most space (as in actually used, not preallocated), and to remove them.
I find that:
# rbd -p <pool name> du...
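For anyone landing here later, a rough, untested sketch of the kind of audit I mean: list every RBD image in the pool and flag any that no VM or container config references. The pool name is a placeholder, and the config paths are the standard /etc/pve locations:

POOL="<pool name>"
for img in $(rbd ls -p "$POOL"); do
    # cluster-wide VM/CT configs live under /etc/pve/nodes/*/qemu-server and */lxc
    if ! grep -rq "$img" /etc/pve/nodes/*/qemu-server/ /etc/pve/nodes/*/lxc/ 2>/dev/null; then
        echo "possibly unattached: $POOL/$img"
    fi
done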
Oh dear, looks like a schoolboy error on my part! I sincerely appreciate you taking the time to look into the issue. It sounds like modifying the networks in situ is not without risk; I think it will be a rebuild at some point. Thanks again.
Hi Shanreich, thank you for replying. Below are the Ceph configuration, network configuration, and VM configuration, in that order.
Ceph Configuration
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network =...
Hello,
I think I have misunderstood how some of the different networks function within Proxmox. I have a cluster of 9 nodes. Each node has two network cards: a 40 Gbit/s card dedicated to Ceph storage, and a 10 Gbit/s card for all other networking (management/corosync, user traffic). I had assumed...
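To illustrate the layout, the separation on each node looks roughly like this in /etc/network/interfaces (interface names and addresses below are placeholders, not my real config):

auto ens1f0
iface ens1f0 inet static
    address 10.10.10.11/24
    # 40 Gbit/s NIC: Ceph traffic only

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # 10 Gbit/s NIC: management/corosync and user traffic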
Hello, I only went as far as filing the bug and left it there, sorry. I've had to fall back to noVNC until the limit is altered. I only experienced the error with 100 or more SPICE-enabled VMs running simultaneously. Sorry I can't be more help!
Hello,
I have one Proxmox server in a cluster (Proxmox 6.1-7) that is running many virtual machines. After a recent round of cloning to create another batch, I am finding that several cannot be started, instead exhibiting the following error message:
Error: Unable to find free port (61000-61099)...
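For anyone hitting the same thing, a quick way to see how much of that range is in use (just a diagnostic one-liner, not a fix):

ss -tln | awk '$4 ~ /:610[0-9][0-9]$/ { n++ } END { print n+0, "ports in use in 61000-61099" }'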
====
*UPDATE*
I restored the backups of pve-ssl.pem and pve-root-ca.pem on PVE1, which had the knock-on effect of syncing the correct pve-root-ca.pem across the cluster. All nodes now report OK after running
openssl verify -CAfile /etc/pve/pve-root-ca.pem /etc/pve/local/pve-ssl.pem
SPICE...
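For anyone in the same spot, the restore amounted to roughly the following (the backup directory is a placeholder for wherever your copies live):

cp /root/cert-backup/pve-root-ca.pem /etc/pve/pve-root-ca.pem
cp /root/cert-backup/pve-ssl.pem /etc/pve/local/pve-ssl.pem
pvecm updatecerts --force    # redistribute/regenerate node certificates from the restored CA
systemctl restart pveproxy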
Hello,
I have been following the instructions at https://pve.proxmox.com/wiki/Certificate_Management on a 5-node Proxmox cluster. Let's Encrypt (via ACME) was used on the first node, PVE1, with great success. Accessing the server shows a valid certificate. Accessing VMs on PVE1 via noVNC...
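(For context, the ACME setup on PVE1 was done along these lines - account name, email, and domain are placeholders:)

pvenode acme account register default admin@example.com
pvenode config set --acme domains=pve1.example.com
pvenode acme cert order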
Whenever I've been in a pinch, I'll stick in an older USB Wi-Fi adapter that is supported, until the kernel is updated to the point where the new Wi-Fi adapter is supported. Not the best option, but a lot easier than compiling your own kernel and dealing with unsupported (from a Proxmox...
Hello,
I'm not sure if this will help with your exact issue, but we've found we had to flash the memory stick in "raw" (or dd) mode with tools such as Rufus on Windows, or with dd on macOS/Linux. Sometimes the tools flash sticks in "ISO" mode, which has caused us errors - but I'll admit I can't...
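For the dd route, the write looks something like this (the ISO name and target device are examples - triple-check the device, as this destroys whatever is on it):

dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync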
Hi, I have some experience with HP DL360 Gen9 servers with the HP P440ar controller. We set the drives up in an HBA-like fashion (so each physical disk is exposed to Proxmox for a Ceph cluster), but we had to set up each drive as an individual RAID 0 (I am not sure if we can run the drives without this RAID 0 approach, but...
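If it helps, the per-drive RAID 0 setup can be scripted with HPE's ssacli; the slot and drive IDs below are examples, so adjust them to your hardware:

ssacli ctrl slot=0 show config
ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0    # one RAID 0 logical drive per physical disk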
Thank you for your reply and insight. I'll be migrating to a 3/2 (size/min_size) replication setup in the near future, once I'm able to offload a number of the active VMs on the cluster; so that is sound advice.
Do you think the benchmarks with the enterprise-level Samsung drives (PM1643) above look sensible? Those drives are...
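(For the 3/2 change itself, my understanding is it comes down to the following, with the pool name as a placeholder:)

ceph osd pool set <pool name> size 3
ceph osd pool set <pool name> min_size 2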
I'm using the default pool as created by Proxmox 5.4 (BlueStore). The Ceph pool settings are as follows:
[global]
auth client required = none
auth cluster required = none
auth service required = none
cluster network = 10.x.x.0/24
debug_asok = 0/0
debug_auth = 0/0...
I've since replaced all the Crucial MX500s (SATA 6 Gbps, rated at 560 MB/s read, 510 MB/s write) with Samsung PM1643s (12 Gbps SAS, rated at 2100 MB/s read, 1700 MB/s write) and get the following CrystalDiskMark results:
(default, no cache): [CrystalDiskMark screenshot]
(writeback cache): [CrystalDiskMark screenshot]
Considering the rated speed of the new...
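If anyone wants to reproduce the numbers without CrystalDiskMark, an fio run along these lines should approximate the sequential read test (parameters are my guess at a comparable workload):

fio --name=seqread --rw=read --bs=1M --size=4G --ioengine=libaio --direct=1 --numjobs=1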
Hello,
I'm trying to get the Mac OSX version of virt-viewer (remote-viewer) working with USB Device Selection. The instructions for building located at https://www.spice-space.org/osx-client.html seem a bit out of date (0.5.7 is the latest bundle provided) but thankfully it was pretty...
Hello, it is a P440ar; running the command gives:
bytes_written: 1073741824
blocksize: 4194304
elapsed_sec: 2.426355
bytes_per_sec: 442532805.034562
iops: 105.508043
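(That output matches what I'd expect from "ceph tell osd.<id> bench"; to sweep every OSD, something like this works:)

for id in $(ceph osd ls); do echo "osd.$id:"; ceph tell osd.$id bench; done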
Hello,
I think I have not set things up correctly on my 4-node Proxmox-with-Ceph cluster. I'm using consumer Crucial MX500 SSDs on HP DL380 Gen9 servers (1 TB RAM). Seven 2 TB drives are set up as individual RAID 0 devices (I couldn't find a pure HBA option in the server config) for Ceph...
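(For reference, the OSDs were created on the per-disk RAID 0 devices roughly like this - device names are placeholders; Proxmox 5.x used "pveceph createosd", while newer releases use "pveceph osd create":)

pveceph createosd /dev/sdb
pveceph createosd /dev/sdc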