I think this has something to do with inter-VLAN traffic. It may have to go to the UDM-Pro from the 16XG >> 48P PoE, which is on a 1 Gbit uplink. I think if I change the uplink to go directly from the UDM-Pro to the 16XG over the 10G link, it should fix that.
I have 3 hosts that each have a couple of 10GbE interfaces on them. I am having an issue getting a VLAN on a KVM host to run at full 10GbE. It runs at 1GbE in my tests, but only on VLAN 8. If I assign the VM an IP in the same subnet as the host's IP, it gets full 10GbE. So I am not sure what I...
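For anyone hitting the same symptom, a quick way to tell whether VLAN 8 traffic is actually being switched on the 10GbE link or routed through the 1GbE uplink. Interface, bridge, and IP names below are placeholders; substitute your own:

```shell
# Diagnostic sketch -- vmbr0, VLAN 8, and the target IP are examples only.
# Run on the KVM host:
ip -d link show vmbr0     # is the bridge VLAN-aware, and which NIC backs it?
bridge vlan show          # is VLAN 8 tagged on both the 10GbE port and the VM's tap?
# From the VM: an extra hop through the router in the path means the traffic
# is being inter-VLAN routed (and so limited by the router's uplink):
traceroute 192.168.8.10   # placeholder test target on VLAN 8
```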
This is not a proxmox issue but will leave this here.
I fired up a new VM on another host and tested it in and out, and it runs at 10GbE. So I will investigate further.
I have 3 hosts, all set up the same way.
I can do iperf from host to host and get ~10 Gbit/s pretty consistently.
[ 3] 0.0- 1.0 sec 1.09 GBytes 9.38 Gbits/sec
[ 3] 1.0- 2.0 sec 1.09 GBytes 9.38 Gbits/sec
[ 3] 2.0- 3.0 sec 1.09 GBytes 9.39 Gbits/sec
But when I try to do it from a VM to a...
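For reference, a sketch of the comparison test, assuming iperf3 is installed on the hosts and VMs; the IPs are placeholders:

```shell
# On the VM (VLAN 8), start a server:
iperf3 -s
# From a host on the same VLAN, then from a host on another VLAN:
iperf3 -c 192.168.8.50        # forward direction
iperf3 -c 192.168.8.50 -R     # reverse direction, to catch asymmetric paths
```

If same-VLAN tests hit ~9.4 Gbit/s but the cross-VLAN tests cap near 1 Gbit/s, the bottleneck is the routed path rather than the VM's NIC.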
So this looks to include nfs mounts I guess, but the total still doesn't make sense. If it's going to factor in "everything", it should be closer to 200TB of storage.
Also, even if I choose the HDD pool of the "hosts" that actually have the HDD disks, the total is 20TB short. 3 hosts @ 4...
No offense, but this still doesn't make any sense. Nowhere in this system am I using anywhere close to "25TB" of data. I know what's on ALL my disks on ALL nodes, and it's about 500GB.
it’s a bug...
OK, so I wiped out my main data volume and reset the values in the UI so nothing is selected, and it shows like this:
This makes no sense and is pretty annoying. Usage is accurate; Storage is making stuff up like our President.
Not quite that. I have Proxmox automatically creating KVM instances via cloning, where all you have to do is run the join-cluster steps. This makes it easy to provision a group of KVM VMs to form a cluster across the nodes. If you want help doing this I can assist, but you'll need some...
There isn't much overhead with KVM, IMO, and whatever Proxmox could bake in wouldn't match the OS management, or the more true-to-form container management, that the real-OS world offers.
I personally run both Docker Swarm and Kubernetes, and even some local Portainer instances, across a Proxmox cluster using KVM hosts. Paired with a KVM provisioning process that automatically configures a host on cloning, it's very easy and fun to manage. I wouldn't want it any other way, really. You...
Thanks Allan,
The usage still doesn't line up and is even more confusing...
This is just the HDD pools (Host1, Host2, Host3) on the hosts that actually have the disks, which is 12x10TB (before the Proxmox upgrade), and it states I have 10TB more than my disks total...
Here is the OSD list and...
I have an HDD pool for Ceph that consists of 12x10TB disks that are spread across 3 nodes so 4x10TB in each.
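For the arithmetic, assuming 3-way replication and ignoring Ceph's own overhead and nearfull margins, the expected ceiling for this pool is roughly:

```shell
# Back-of-envelope capacity check; assumes 12 x 10TB OSDs and size=3 replication.
raw_tb=$((12 * 10))        # 120 TB raw in the HDD pool
usable_tb=$((raw_tb / 3))  # ~40 TB usable with 3-way replication
echo "raw=${raw_tb}TB usable=${usable_tb}TB"
```

So anything the UI reports meaningfully above 120TB raw for this pool can't be right.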
In datacenter summary I see this:
but in datacenter >> ceph >> performance I see this:
The above "Usage" seems accurate as I am using 3 part replication and currently have about...
I assume you have the 10-bay version. Personally, I'd put Proxmox on a much smaller SSD (or a mirrored SSD pair) in a 3.5" bay, then set up 2 pools: the 930GB pairs spanned across 4 nodes in 3/2, and the 8x2TB in a pool across all nodes as well. Set up the storage network to be on 10Gb...