Thanks. Can I just edit the file directly and let corosync take care of the rest, or is there a command I need to run to reload the conf on all nodes?
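For reference, the procedure I understand from the admin guide is roughly the following (untested sketch; the .new/.bak names are just my convention). Once the edited file lands in /etc/pve, pmxcfs should sync it to all nodes and corosync should pick up the new config, so no per-node reload command should be needed:

cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
nano /etc/pve/corosync.conf.new      # make the change and bump config_version in totem { }
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak   # keep a backup just in case
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf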
Do you think this explains the difference between the Windows RBD results and rados bench?
Just throwing this out there to see if anyone has experienced anything similar.
Under Nautilus, our Windows VMs were able to do about 1.5 GB/sec sequential read, and 1.0 GB/sec sequential write.
Under Nautilus, our rados bench was showing us 2.0 GB/s sequential read and write, and this was...
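For context, the rados bench numbers come from runs along these lines (pool name and runtime are placeholders, not the exact commands from our notes):

rados bench -p testpool 60 write --no-cleanup     # sequential 4M object writes
rados bench -p testpool 60 seq                    # sequential reads of those objects
rados -p testpool cleanup                         # remove the benchmark objects afterwards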
There is a way to reconstruct the monmap from data residing in the OSDs.
I tried it once and was not successful, but I'm nobody.
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
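If it helps anyone, the procedure in that doc boils down to something like this (single-host sketch with example paths; definitely double-check against the link before running anything):

ms=/root/mon-store; mkdir -p $ms
# pull the cluster map out of every stopped OSD into a temporary mon store
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path "$osd" --no-mon-config \
        --op update-mon-db --mon-store-path "$ms"
done
# rebuild the monitor store with a keyring that has mon caps,
# then copy the result over the broken monitor's store.db
ceph-monstore-tool "$ms" rebuild -- --keyring /etc/pve/priv/ceph.client.admin.keyring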
The context of the qm, pct, and vz commands is limited to the host on which the command was issued.
The pvesh commands will give you a wealth of that "cluster perspective."
https://pve.proxmox.com/pve-docs/pvesh.1.html
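For example, from any single node:

pvesh get /cluster/resources --type vm     # every VM/CT in the cluster, with node, status, memory, CPU
pvesh get /cluster/resources --type node   # per-node totals
pvesh get /nodes                           # node list with basic status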
How did you google that and not find the correct answer, but you did find a...
Hi Spirit... Can we try to enable some of the "illegal" characters on the vNet ID? Why do you think underscore, period, or hyphen should not be allowed? What about increasing the max length to 16?
Thanks
Thanks spirit. I knew I read that section in the doc and it just didn't click, but yes it was easy to forget.
In the meantime, I know it's not pretty, but it's the least amount of work:
sed -i '/interfaces.d/d' /etc/network/interfaces; printf 'source /etc/network/interfaces.d/*' >>...
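Spelled out (the redirect target got cut off above; I'm assuming it is /etc/network/interfaces, and adding a trailing newline so the appended line ends cleanly):

sed -i '/interfaces.d/d' /etc/network/interfaces
printf 'source /etc/network/interfaces.d/*\n' >> /etc/network/interfaces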
Please build out the monitoring functionality so that per-RBD disk and per-pool performance stats can be viewed in the PVE GUI rather than in the Ceph mgr dashboard or an external Grafana host.
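In the meantime, the closest CLI equivalents I'm aware of (Nautilus and later, with the relevant mgr module enabled) are along these lines:

rbd perf image iostat <pool>    # per-RBD-image IOPS, throughput, and latency
ceph osd pool stats             # per-pool client I/O rates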
Just to add to this, if I attach my vNet to a NIC on a powered off machine, and then start it, the error is different:
bridge 'testNet' does not exist
kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1
What should I be...
VLAN mode is working well in terms of creating/applying the config.
So I create a VXLAN zone called "SDN" with MTU 8950 and all my hosts' vmbr0 addresses in the peer list.
Then I create a vNet called testNet with tag 9999 and leave the rest on auto. When I hit apply, the "pending" zone turns to error...
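For reference, the resulting config on disk looks roughly like this as far as I can tell (peer addresses are placeholders):

# /etc/pve/sdn/zones.cfg
vxlan: SDN
        peers 10.0.0.1,10.0.0.2,10.0.0.3
        mtu 8950

# /etc/pve/sdn/vnets.cfg
vnet: testNet
        zone SDN
        tag 9999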
yikes, I don't know how I could have missed that. sorry. thanks for the great module.
When people outgrow the simple VLAN and have to go for more encapsulation, how do they handle reducing the MTU on thousands of NICs? Is there any real performance hit from lowering the MTU for the other modes?
Is anyone using this in production, even for the simple VLAN use case?
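(My own back-of-the-envelope on the overhead part: VXLAN encapsulation adds about 50 bytes per packet (14 inner Ethernet + 8 VXLAN + 8 UDP + 20 outer IP), so a vnet over a standard 1500-byte underlay needs MTU 1450, and over a 9000-byte jumbo underlay 8950. That is roughly 3% of a 1500-byte frame and well under 1% of a jumbo frame, so I would guess the real cost is the operational one of touching every guest NIC rather than raw throughput.)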
I can't seem to create a zone and net that don't get the warning icon. The PVE GUI bugs out, forcing a full screen refresh, and VMs attached to my net will not start:
bridge 'testNet' does not exist
kvm: network...
The VM has been powered on for 100 hours and has not been trimmed.
FreeBSD says the file system has trim enabled, however.
[2.4.5-RELEASE][root@fw00]/root: tunefs -p /dev/da0p3
tunefs: POSIX.1e ACLs: (-a) disabled
tunefs: NFSv4 ACLs: (-N)...
I have a 15GB pfSense machine that has about 1 GB used on its UFS file system. Back-end is NVMe Ceph RBD.
Ceph shows the disk is 15 GB with 14 GB used.
There is currently no option for the QEMU guest agent on pfSense, and I've noticed in the few hours it's been running that it has not been TRIMmed...
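If anyone else hits this, my working theory is that discard has to be enabled end-to-end on the virtual disk, not just inside FreeBSD; an untested sketch of what I plan to try (VMID, storage, and disk names are examples):

qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 rbd_nvme:vm-100-disk-0,discard=on,ssd=1   # let the guest's TRIMs reach RBD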
We are using PVE and Ceph in Dell blades using the M1000e modular chassis.
We are currently using dual mezzanine cards with 2x 10 GbE ports, one for Ceph front-end and one for Ceph back-end.
Public LANs, guest LANs, and Corosync are handled by 4x 10GbE cards on 40GbE MXL switches, so all is...
Thanks, I'm familiar with corosync.conf, just curious how the GUI was coming along.
You wouldn't happen to have any links to some further reading on the subject of those larger clusters, would you?
One of the announcements was support for up to 8 corosync links.
If more independent corosync links are used, does this mean it is more reasonable to have larger clusters, beyond 32 nodes?
If I have a cluster up and running currently with only link0, how can I configure more links?
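To partly answer my own question, my understanding is that it comes down to adding a ringX_addr per node in /etc/pve/corosync.conf (edited via a copy, with config_version bumped before moving it into place), roughly like this with example addresses:

nodelist {
  node {
    name: pve01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.0.1
    ring1_addr: 10.20.0.1    # new link1 address on the second network
  }
  # ...and a ring1_addr for every other node...
}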
These are the 2 drives I'm looking at right now for a 16-32 node PVE6+Ceph RBD setup.
Referring to the 2018 performance doc,
https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark
They ran an fio command to represent expected 4KQD1 performance as it pertains to OSD journal...
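From memory the job is a 4K, iodepth=1, sync write, roughly of this shape (this writes directly to the raw device and destroys its contents; the device name is an example):

fio --ioengine=libaio --filename=/dev/sdX --direct=1 --sync=1 --rw=write \
    --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test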
Has anyone ever looked at Folder2RAM:
https://github.com/bobafetthotmail/folder2ram
I see it used more often in conjunction with OMV, but since PVE is also Debian-based, I wanted to mention it here to get your thoughts.
Consider a PVE/Ceph setup composed of blade servers such as Dell M610, M620...