Search results

  1. After 6.3 upgrade, VM boot device missing if member of an HA group

    If a virtual machine is not a member of an HA group, you can add a disk, ISO, or network boot device and expect it to boot from them in the order given on the options screen. If you power on the machine and interrupt POST with the escape key, you can make a one-time selection to manually boot...
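
    For reference, on PVE 6.3 and later the order can also be pinned explicitly from the CLI; a minimal sketch, assuming a hypothetical VMID 100 with disk scsi0, ISO drive ide2, and NIC net0:

        # Hypothetical VMID and device names; quote the value because of the semicolons
        qm set 100 --boot 'order=scsi0;ide2;net0'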
  2. Performance decrease after Octopus upgrade

    Very good, back above 1 GB/s on write. Removing the SSD emulation gave a small boost as well.
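
    The SSD emulation flag is part of the disk definition, so dropping it means re-specifying the disk; a hedged sketch, with hypothetical VMID and volume names:

        # Reattach the same volume with SSD emulation off (hypothetical IDs)
        qm set 100 --scsi0 rbd:vm-100-disk-0,ssd=0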
  3. Performance decrease after Octopus upgrade

    Thanks. Can I just edit the file directly and let corosync take care of the rest, or is there a command I need to run to reload the conf on all nodes? Do you think this explains the difference between Windows/RBD and rados bench?
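
    The usual PVE answer: /etc/pve/corosync.conf lives on the clustered pmxcfs, so an edit propagates to all nodes and corosync reloads it on its own once config_version is incremented. A sketch:

        # Edit the cluster-wide copy and bump config_version inside it
        nano /etc/pve/corosync.conf
        # Verify link status afterwards on each node
        corosync-cfgtool -s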
  4. Performance decrease after Octopus upgrade

    Just throwing this out there to see if anyone has experienced anything similar. Under Nautilus, our Windows VMs could do about 1.5 GB/s sequential read and 1.0 GB/s sequential write. Under Nautilus, rados bench showed us 2.0 GB/s sequential read and write, and this was...
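
    For anyone reproducing these numbers, a sketch of the usual rados bench invocations, assuming a pool named "rbd" (substitute your own):

        rados bench -p rbd 60 write --no-cleanup   # 60 s sequential write, keep objects
        rados bench -p rbd 60 seq                  # sequential read of the objects just written
        rados -p rbd cleanup                       # remove the benchmark objects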
  5. Add snapshot=0 option to Hard Disk?

    I am replying just to say this is a much-needed feature. Live detach/attach is not realistic.
  6. Crushmap vanished after Networking error

    There is a way to reconstruct the monmap from data residing in the OSDs. I tried it once and was not successful, but I'm nobody. https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
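
    The linked procedure rebuilds the mon store from the copies of the cluster map held by every OSD; a heavily abridged sketch of its core loop, assuming stopped OSDs under /var/lib/ceph/osd:

        ms=/tmp/mon-store; mkdir -p "$ms"
        for osd in /var/lib/ceph/osd/ceph-*; do
            ceph-objectstore-tool --data-path "$osd" --no-mon-config \
                --op update-mon-db --mon-store-path "$ms"
        done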
  7. How to export all VM IDs, names, and notes?

    The context of the qm, pct, and vz commands is limited to the host on which the command was issued. The pvesh commands will give you a wealth of that "cluster perspective": https://pve.proxmox.com/pve-docs/pvesh.1.html How did you google that and not find the correct answer, but you did find a...
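
    A sketch of that cluster-wide view; the IDs and names come from one call, while the notes sit in each VM's config, so node "pve1" and VMID 100 below are placeholders:

        pvesh get /cluster/resources --type vm --output-format json   # VMIDs and names, cluster-wide
        pvesh get /nodes/pve1/qemu/100/config --output-format json    # "description" holds the notes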
  8. proxmox 7.0 sdn beta test

    Hi Spirit... Can we try to enable some of the "illegal" characters in the vNet ID? Why do you think underscore, period, or hyphen should not be allowed? What about increasing the max length to 16? Thanks
  9. proxmox 7.0 sdn beta test

    Thanks spirit. I knew I read that section in the doc and it just didn't click, but yes, it was easy to forget. In the meantime, I know it's not pretty, but it's the least amount of work: sed -i '/interfaces.d/d' /etc/network/interfaces; printf 'source /etc/network/interfaces.d/*' >>...
  10. Ceph Octopus

    Please build out the monitoring functionality so that per-RBD-disk and per-pool performance stats can be viewed in the PVE GUI rather than in the Ceph mgr dashboard or an external Grafana host.
  11. proxmox 7.0 sdn beta test

    Just to add to this: if I attach my vNet to a NIC on a powered-off machine and then start it, the error is different:

        bridge 'testNet' does not exist
        kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512
        TASK ERROR: start failed: QEMU exited with code 1

    What should I be...
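
    A plausible first troubleshooting pass, assuming the SDN beta writes its generated config to /etc/network/interfaces.d/sdn as the docs describe:

        grep 'interfaces.d' /etc/network/interfaces   # the file must source /etc/network/interfaces.d/*
        cat /etc/network/interfaces.d/sdn             # the vNet bridge (testNet) should be defined here
        ifreload -a                                   # ifupdown2 reload actually creates the bridges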
  12. proxmox 7.0 sdn beta test

    VLAN mode is working well in terms of creating and applying the config. So I create a VXLAN zone called "SDN" with MTU 8950 and all my hosts' vmbr0 addresses in the peer list. Then I create a vNet called testNet, tag 9999, and the rest on auto. When I hit apply, the "pending" zone turns to error...
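
    The same setup from the CLI, as a hedged sketch against the /cluster/sdn API; the peer addresses below are placeholders:

        pvesh create /cluster/sdn/zones --zone SDN --type vxlan \
            --peers 10.0.0.1,10.0.0.2,10.0.0.3 --mtu 8950
        pvesh create /cluster/sdn/vnets --vnet testNet --zone SDN --tag 9999
        pvesh set /cluster/sdn   # apply the pending configuration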
  13. proxmox 7.0 sdn beta test

    Thanks, guys. I will go with jumbo frames on the physical network as well.
  14. proxmox 7.0 sdn beta test

    Yikes, I don't know how I could have missed that; sorry, and thanks for the great module. When people outgrow the simple VLAN case and have to go for more encapsulation, how do they handle reducing the MTU on thousands of NICs? Is there any real performance hit from lowering the MTU for the other modes?
  15. proxmox 7.0 sdn beta test

    Is anyone using this in production, even for the simple VLAN use case? I can't seem to create a zone and vNet that doesn't get the warning icon. The PVE GUI bugs out, forcing a full screen refresh, and VMs attached to my vNet will not start:

        bridge 'testNet' does not exist
        kvm: network...
  16. Is qemu guest agent required to TRIM thin provisioned storage?

    The VM has been powered on for 100 hours and has not been TRIMmed. FreeBSD says the file system has TRIM enabled, however:

        [2.4.5-RELEASE][root@fw00]/root: tunefs -p /dev/da0p3
        tunefs: POSIX.1e ACLs: (-a) disabled
        tunefs: NFSv4 ACLs: (-N)...
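
    For anyone checking the same thing: UFS TRIM is a tunefs flag and can only be toggled while the file system is unmounted or in single-user mode; a sketch against the same device:

        tunefs -p /dev/da0p3          # print current flags, including "trim: (-t)"
        tunefs -t enable /dev/da0p3   # enable TRIM (file system must not be mounted read-write)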
  17. Is qemu guest agent required to TRIM thin provisioned storage?

    I have a 15 GB pfSense machine with about 1 GB used on its UFS file system. The back-end is NVMe Ceph RBD, yet Ceph shows the disk as 15 GB with 14 GB used. There is currently no qemu guest agent option for pfSense, and I've noticed in the few hours it's been running that it has not been TRIMmed...
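
    Independent of the guest agent, guest-initiated TRIM only reaches Ceph if discard is enabled on the virtual disk; a sketch with hypothetical VMID and volume names:

        # Pass discard/TRIM through to RBD (hypothetical IDs; VirtIO SCSI recommended)
        qm set 100 --scsi0 rbd:vm-100-disk-0,discard=on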
  18. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    We are using PVE and Ceph in Dell blades in the M1000e modular chassis, currently with dual mezzanine cards with 2x 10 GbE ports, one for the Ceph front-end and one for the Ceph back-end. Public LANs, guest LANs, and Corosync are handled by 4x 10 GbE cards on 40 GbE MXL switches, so all is...
  19. Proxmox VE 6.2 released!

    Thanks, I'm familiar with corosync.conf; just curious how the GUI was coming along. You wouldn't happen to have any links to some further reading on the subject of those larger clusters, would you?
  20. Proxmox VE 6.2 released!

    One of the announcements was support for up to 8 corosync links. If more independent corosync links are used, does this mean it is more reasonable to run larger clusters, beyond 32 nodes? If I have a cluster up and running currently with only link0, how can I configure more links?
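
    On the last question: additional links are declared per node in corosync.conf as extra ringX_addr entries; a sketch with placeholder names and addresses, remembering to bump config_version in the totem section:

        node {
            name: pve1            # placeholder node name
            nodeid: 1
            quorum_votes: 1
            ring0_addr: 10.0.0.1  # existing link0 address (placeholder)
            ring1_addr: 10.1.0.1  # new link1 address (placeholder)
        }
        # Add a ring1_addr to every node block; kronosnet then brings up link1 on its own.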