First I tried this, but found out later that the settings weren't persistent:
echo eth >/sys/bus/pci/devices/0000:XX:00.0/mlx4_port1
echo eth >/sys/bus/pci/devices/0000:XX:00.0/mlx4_port2
It isn't persistent through reboots, or even through ifdowns.
Basically, for drivers on Linux, you install the Mellanox...
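For what it's worth, one approach that is commonly used to make the mlx4 port type survive reboots is the mlx4_core module option rather than the sysfs knob. This is a sketch, assuming a ConnectX-3 style card driven by mlx4_core; the file name is arbitrary and the value 2,2 (Ethernet on both ports) should be adjusted to your card:

```shell
# /etc/modprobe.d/mlx4.conf -- set port type at driver load time
# port_type_array: 1 = InfiniBand, 2 = Ethernet, one entry per port
options mlx4_core port_type_array=2,2

# Then rebuild the initramfs so the option is applied at boot:
update-initramfs -u
```

Unlike the sysfs writes, this is applied every time the driver loads, so it holds across reboots.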
So for the record, everyone: as I couldn't get the monitors or OSDs to bind to more than one address (IPv6 or IPv4), we simply created some static routes on the standalone host, which enabled it to reach the segregated subnet that Ceph is on.
Of course, first we had to have IPv6 addresses on...
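For anyone in the same spot, the static-route idea looks roughly like this. All subnets, gateways, and interface names below are placeholders for this setup, not the poster's actual values:

```shell
# On the standalone host, route the isolated Ceph public network via a
# cluster node that has a leg in both networks (placeholder addresses):
ip route add 10.10.10.0/24 via 192.0.2.1 dev eno1

# Same idea for IPv6:
ip -6 route add fd00:ceph::/64 via fd00:lan::1 dev eno1

# To persist on Debian/Proxmox, add to the interface stanza in
# /etc/network/interfaces:
#   post-up ip route add 10.10.10.0/24 via 192.0.2.1 dev eno1
```

The forwarding node also needs `net.ipv4.ip_forward=1` (and the IPv6 equivalent) for this to work.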
Hello. To help make migration much easier, I'd like to connect a standalone node I have to the Proxmox-managed Ceph storage on a three-node cluster, by adding that storage as RBD on the standalone node. The issue is that the current ceph public_network is isolated, because it's on directly attached 40G NICs...
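Assuming the standalone node also runs Proxmox, the RBD attachment itself would be a storage.cfg entry along these lines. Storage name, pool, and monitor addresses are placeholders; the monitor IPs must be reachable from the standalone node, which is where the routing problem comes in:

```shell
# Hypothetical /etc/pve/storage.cfg entry on the standalone node:
#
#   rbd: ceph-vms
#           monhost 10.10.10.1 10.10.10.2 10.10.10.3
#           pool vms
#           content images
#           username admin
#
# For an external cluster, the keyring is expected at
# /etc/pve/priv/ceph/<storage-id>.keyring, e.g.:
#   /etc/pve/priv/ceph/ceph-vms.keyring
```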
Hello all,
We're looking for best practices for setting IOPS and throughput limits on "Hard Disks" in Proxmox.
There are obviously limit and burst settings under Advanced on a given disk.
Questions we have:
Is there a way to see, from the Host level, the current IOPS/Throughput a...
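Not an authoritative answer, but a sketch of both halves of the question: setting the limits from the CLI, and inspecting per-disk counters from the host. VM ID 100, the scsi0 disk, and the storage name are placeholders:

```shell
# Cap read/write IOPS (with burst) and bandwidth on an existing disk.
# iops_*_max are the burst values shown under Advanced in the GUI.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iops_rd=500,iops_rd_max=1000,iops_wr=500,iops_wr_max=1000,mbps_rd=100,mbps_wr=100

# Inspect cumulative per-disk I/O counters from the host via the
# QEMU monitor:
qm monitor 100
#   qm> info blockstats
```

The blockstats counters are cumulative, so to get a current rate you'd sample them twice and divide by the interval.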
So we hit this problem again, with the same setup described at the start of this thread, but this time while live-migrating a VM that didn't have discard set :/
The only thing I could find worth noting was that this VM did not have "format=raw", as mentioned in the bug referenced above.
What happened...
We have ZFS for the Proxmox OS, and then a separate volume for VMs. We're not using LVM, just ZFS, which I believe supports thin provisioning natively (otherwise we wouldn't see the storage usage shrink when we run "fstrim -a" with discard set).
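One way to confirm the thin-provisioning behavior is to watch a zvol's space accounting around a trim. The dataset name below is a placeholder:

```shell
# Check the zvol's space usage before the trim:
zfs get used,referenced,volsize rpool/data/vm-100-disk-0

# ...then run "fstrim -a" inside the guest (the disk must have
# discard enabled), and check again:
zfs get used,referenced,volsize rpool/data/vm-100-disk-0

# If the volume is thin (non-zero "refreservation" would indicate thick),
# "used" should drop as trimmed blocks are released back to the pool.
```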
Good to know. That said, is there a certain fragmentation percentage I should monitor for, so that I don't have to find out about it when customers start complaining about performance issues?
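There's no official magic number, but pool fragmentation is cheap to watch with `zpool list -H -o frag`. A minimal monitoring sketch, with an assumed threshold (many admins start paying attention somewhere in the 50-70% range; tune it to your workload):

```shell
#!/bin/sh
# Sketch: warn when ZFS pool fragmentation crosses a threshold.
# THRESHOLD is an assumption, not a documented limit.
THRESHOLD=50

check_frag() {
    # $1 = fragmentation percentage, e.g. "12%" from: zpool list -H -o frag
    frag=${1%\%}                           # strip a trailing "%" if present
    if [ "$frag" -ge "$THRESHOLD" ]; then
        echo "WARN"
    else
        echo "OK"
    fi
}

# In a cron job you would feed it live data, e.g.:
#   check_frag "$(zpool list -H -o frag rpool)"
check_frag 12%    # prints OK
check_frag 63%    # prints WARN
```

Note that fragmentation here measures free-space fragmentation, and it only ever goes down when space is freed or the pool is rebuilt, so trending it over time matters more than any single reading.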