Search results

  1. Performance decrease after Octopus upgrade

    Thanks. Can I just edit the file directly and let corosync take care of the rest, or is there a command I need to run to reload the conf on all nodes? Do you think this explains the difference between Windows/rbd and rados bench?
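
    For reference, a minimal sketch of the usual reload workflow on PVE; this assumes a standard cluster where pmxcfs distributes the config, so check the pvecm docs before relying on it:

      # Edit the cluster-wide copy, not /etc/corosync/corosync.conf on each node,
      # and increment config_version in the totem section before saving.
      nano /etc/pve/corosync.conf
      # pmxcfs pushes the new version to all nodes and corosync reloads it;
      # verify link status afterwards:
      corosync-cfgtool -s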
  2. Performance decrease after Octopus upgrade

    Just throwing this out there to see if anyone has experienced anything similar. Under Nautilus, our Windows VMs were able to do about 1.5 GB/s sequential read and 1.0 GB/s sequential write. Under Nautilus, our rados bench was showing us 2.0 GB/s sequential read and write, and this was...
  3. Add snapshot=0 option to Hard Disk?

    I am replying just to say this is a much-needed feature. Live detach/attach is not realistic.
  4. Crushmap vanished after Networking error

    There is a way to reconstruct the monmap from data residing in the OSDs. I tried it once and was not successful, but I'm nobody. https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
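
    For context, the linked procedure roughly amounts to the following on each OSD host; a hedged sketch only, so double-check the flags and paths against your Ceph version:

      # Accumulate cluster map data from every local OSD into a fresh mon store
      ms=/root/mon-store
      mkdir -p "$ms"
      for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path "$osd" --no-mon-config \
          --op update-mon-db --mon-store-path "$ms"
      done
      # Repeat on every host (carrying $ms along), then rebuild the monitor
      # store with ceph-monstore-tool as described on the linked page.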
  5. How to export all VM IDs, names, and notes?

    The context of the qm, pct, and vz commands is limited to the host on which the command was issued. The pvesh commands will give you a wealth of that "cluster perspective." https://pve.proxmox.com/pve-docs/pvesh.1.html How did you google that and not find the correct answer, but you did find a...
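
    As a concrete starting point (the endpoint paths are real; <node> and <vmid> are placeholders):

      # List every VM in the cluster with its ID, name, and node
      pvesh get /cluster/resources --type vm --output-format json
      # Notes live in each guest's config, so fetch those per VM
      pvesh get /nodes/<node>/qemu/<vmid>/config --output-format json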
  6. proxmox 7.0 sdn beta test

    Hi Spirit... Can we try to enable some of the "illegal" characters in the vNet ID? Why do you think underscore, period, or hyphen should not be allowed? What about increasing the max length to 16? Thanks
  7. proxmox 7.0 sdn beta test

    Thanks, spirit. I knew I read that section in the doc and it just didn't click, but yes, it was easy to forget. In the meantime, I know it's not pretty, but it's the least amount of work:
      sed -i '/interfaces.d/d' /etc/network/interfaces; printf 'source /etc/network/interfaces.d/*' >>...
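
    The one-liner is cut off above; presumably it re-appends the source line. A sketch under that assumption (the target file is my guess, and the printf should end with a newline):

      # Drop any existing interfaces.d source line, then re-append it
      sed -i '/interfaces.d/d' /etc/network/interfaces
      printf 'source /etc/network/interfaces.d/*\n' >> /etc/network/interfaces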
  8. Ceph Octopus

    Please build out the monitoring functionality so that per-RBD-disk and per-pool performance stats can be viewed in the PVE GUI rather than in the Ceph mgr dashboard or an external Grafana host.
  9. proxmox 7.0 sdn beta test

    Just to add to this: if I attach my vNet to a NIC on a powered-off machine and then start it, the error is different:
      bridge 'testNet' does not exist
      kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512
      TASK ERROR: start failed: QEMU exited with code 1
    What should I be...
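
    A couple of hedged checks for this symptom, assuming the beta generates its config via ifupdown2 (the interfaces.d/sdn path matches what the beta writes, but verify on your nodes):

      # Does the generated bridge actually exist on this node?
      ip -br link show testNet
      # Inspect what the SDN layer generated, then re-apply it
      cat /etc/network/interfaces.d/sdn
      ifreload -a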
  10. proxmox 7.0 sdn beta test

    VLAN mode is working well in terms of creating/applying the config. So I create a VXLAN zone called "SDN" with MTU 8950 and all my hosts' vmbr0 addresses in the peer list. Then I create a vNet called testNet, tag 9999, and the rest on auto. When I hit apply, the "pending" zone turns to error...
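
    For anyone comparing notes, the resulting config should look roughly like this (my reading of the /etc/pve/sdn/zones.cfg and vnets.cfg layout; the peer addresses are placeholders):

      vxlan: SDN
              peers 10.0.0.1,10.0.0.2,10.0.0.3
              mtu 8950

      vnet: testNet
              zone SDN
              tag 9999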
  11. proxmox 7.0 sdn beta test

    Thanks, guys. I will go with jumbo frames on the physical network as well.
  12. proxmox 7.0 sdn beta test

    Yikes, I don't know how I could have missed that. Sorry, and thanks for the great module. When people outgrow the simple VLAN case and have to go for more encapsulation, how do they handle reducing the MTU on thousands of NICs? Is there any real performance hit from lowering the MTU for the other modes?
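
    One hedged data point on the per-NIC part: the MTU can be set on the guest NIC definition itself (virtio only, if I recall the qm man page correctly), which at least makes it scriptable:

      # <vmid> is a placeholder; mtu=1 inherits the bridge MTU instead
      qm set <vmid> --net0 virtio,bridge=testNet,mtu=1450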
  13. proxmox 7.0 sdn beta test

    Is anyone using this in production, even for the simple VLAN use case? I can't seem to create a zone and vNet that don't get the warning icon. The PVE GUI bugs out, forcing a full-screen refresh, and VMs attached to my vNet will not start:
      bridge 'testNet' does not exist
      kvm: network...
  14. Is qemu guest agent required to TRIM thin-provisioned storage?

    The VM has been powered on for 100 hours and has not been trimmed. FreeBSD says the file system has TRIM enabled, however:
      [2.4.5-RELEASE][root@fw00]/root: tunefs -p /dev/da0p3
      tunefs: POSIX.1e ACLs: (-a) disabled
      tunefs: NFSv4 ACLs: (-N)...
  15. Is qemu guest agent required to TRIM thin-provisioned storage?

    I have a 15 GB pfSense machine that has about 1 GB used on its UFS file system. The back end is NVMe Ceph RBD. Ceph shows the disk is 15 GB with 14 GB used. There is currently no qemu guest agent option for pfSense, and I've noticed in the few hours it's been running that it has not been TRIMmed...
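
    Worth checking regardless of the guest agent question: the virtual disk itself has to allow discard before any guest-issued TRIM can reach Ceph. A hedged example (storage name and disk volume are placeholders; you re-specify the whole disk line when adding options):

      qm set <vmid> --scsi0 <storage>:vm-<vmid>-disk-0,discard=on,ssd=1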
  16. PVE and Ceph on Mellanox InfiniBand parts, what is the current state of support?

    We are running PVE and Ceph on Dell blades in the M1000e modular chassis. We currently use dual mezzanine cards with 2x 10 GbE ports, one for the Ceph front-end and one for the Ceph back-end. Public LANs, guest LANs, and Corosync are handled by 4x 10GbE cards on 40GbE MXL switches, so all is...
  17. Proxmox VE 6.2 released!

    Thanks, I'm familiar with corosync.conf; I was just curious how the GUI support was coming along. You wouldn't happen to have any links to further reading on the subject of those larger clusters, would you?
  18. Proxmox VE 6.2 released!

    One of the announcements was support for up to 8 corosync links. If more independent corosync links are used, does this mean it is more reasonable to run larger clusters, beyond 32 nodes? If I have a cluster currently running with only link0, how can I configure more links?
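
    In case it helps the next reader, adding a link boils down to editing /etc/pve/corosync.conf: give each node an address on the new network and increment config_version. A trimmed sketch (node names and addresses are placeholders):

      nodelist {
        node {
          name: pve1
          nodeid: 1
          ring0_addr: 10.0.0.1
          ring1_addr: 10.1.0.1   # address on the new link1 network
        }
      }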
  19. Choose between Samsung PM983 and Intel DC P4510

    These are the two drives I'm looking at right now for a 16-32 node PVE 6 + Ceph RBD setup. Referring to the 2018 performance doc (https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark), they ran an fio command to represent expected 4K QD1 performance as it pertains to OSD journal...
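
    The command in that doc is along these lines, if memory serves (the device path and job name here are placeholders, and note it writes directly to the raw device, destroying its contents):

      fio --ioengine=libaio --filename=/dev/nvme0n1 --direct=1 --sync=1 \
          --rw=write --bs=4k --numjobs=1 --iodepth=1 \
          --runtime=60 --time_based --name=journal-4kqd1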
  20. redirecting intense write activity, Folder2RAM, etc.

    Has anyone ever looked at Folder2RAM? https://github.com/bobafetthotmail/folder2ram I see it more often used in conjunction with OMV, but since PVE is also Debian-based, I wanted to mention it here to get your thoughts. Consider a PVE/Ceph setup comprised of blade servers such as the Dell M610, M620...
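
    For the curious, my reading of the project README is that usage amounts to listing directories in its config and generating mount units; treat this sketch as unverified:

      # /etc/folder2ram/folder2ram.conf -- one line per directory to shadow in tmpfs
      tmpfs  /var/log
      # then generate the systemd units and mount everything
      folder2ram -configure
      folder2ram -mountall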
