Search results

  1. update to 7.2, to use VirGL

    Can VirGL be used with the NVIDIA-provided binary driver? I lost VirGL support after installing NVIDIA-Linux-x86_64-525.85.07-vgpu-kvm-custom.run: Error: TASK ERROR: no DRM render node detected (/dev/dri/renderD*), no GPU? - needed for 'virtio-gl' display. Modules: root@bigiron:~# ls -l...
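
    A quick way to confirm the prerequisite named in that error, i.e. whether any DRM render node exists at all (a hedged sketch; renderD128 is just the usual first node name):

      # check for the DRM render nodes the 'virtio-gl' display requires
      ls -l /dev/dri/                 # expect card0, renderD128, ... if a DRM driver is active
      lsmod | grep -E 'drm|nvidia'    # see which kernel modules are actually providing DRM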
  2. non ZFS VM storage

    Thanks for the feedback. I'm going with XFS+QCOW2 on top of a 2+2 RAID10 virtual disk.
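
    For reference, a minimal sketch of wiring that up as a directory storage (the device path, mount point, and storage name are illustrative assumptions):

      # format the RAID10 virtual disk with XFS and register it as a directory storage
      mkfs.xfs /dev/sdb                 # assuming the RAID10 volume shows up as /dev/sdb
      mount /dev/sdb /mnt/datastore     # plus a matching /etc/fstab entry
      pvesm add dir raid10-xfs --path /mnt/datastore --content images,rootdir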
  3. non ZFS VM storage

    Checking the documentation, it seems I won't have compression without ZFS. Thin provisioning should be about the same with LVM-Thin vs XFS+QCOW2. Any insights on snapshot performance with LVM vs QCOW2?
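
    The thin-provisioning half is easy to verify directly; a small sketch (file path assumed):

      # qcow2 allocates on write, so a fresh 32G disk occupies almost nothing
      qemu-img create -f qcow2 /mnt/datastore/test.qcow2 32G
      qemu-img info /mnt/datastore/test.qcow2    # compare 'virtual size' vs 'disk size'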
  4. non ZFS VM storage

    Hello! I've used ZFS data stores with plain SAS HBAs before and have been pretty happy with them handling protection & compression. For a given setup, I have: - OS boot: LSI SAS HBA + 1 x KINGSTON SA400S3 + 1 x SanDisk SD5SB2-1 + ZFS Mirror. - Datastore storage: RAID controller...
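
    For context, the boot mirror described above boils down to something like this (device names are placeholders):

      # two-way ZFS boot mirror with the compression mentioned above
      zpool create -o ashift=12 rpool mirror /dev/sda /dev/sdb
      zfs set compression=lz4 rpool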
  5. proxmox 7.0 sdn beta test

    It's a remote location, so I can't change the connectivity. I will look into the IPsec alternative; the DCN team assures me the switches shouldn't mess with the VXLAN traffic.
  6. proxmox 7.0 sdn beta test

    The switches also run VXLAN as VTEPs between themselves. Could that be the issue?
  7. proxmox 7.0 sdn beta test

    Interestingly enough, with tcpdump I see outgoing traffic on both sides but no incoming traffic. I haven't set up any firewall rules so far. Ping works with "no fragment" and size=1600 bytes (I have MTU=9000 on the NIC and switch side). Edit: added a rule allowing UDP traffic on 4789, but no joy.
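
    For anyone following along, a hedged sketch of those checks (the uplink interface name and peer address are assumptions):

      # watch for VXLAN traffic on the uplink; incoming packets should show up here
      tcpdump -ni vmbr0 udp port 4789
      # VXLAN adds ~50 bytes of overhead, so test the full 9000-byte path explicitly
      ping -M do -s 8972 <peer-node-ip>    # 8972 payload + 28 bytes of headers = 9000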
  8. proxmox 7.0 sdn beta test

    Hello, is there any tooling to troubleshoot VXLAN zones? I have the configuration applied on two nodes, but once I move a VM to the second node, connectivity with VM2, which stays on the original node, is lost.
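
    A hedged starting point using generic kernel tooling (no SDN-specific device names assumed): the VXLAN interfaces and their forwarding entries show whether the zone is wired up on both nodes.

      # inspect the VXLAN interfaces the SDN layer created: VNI, local IP, dstport
      ip -d link show type vxlan
      # the all-zero FDB entries list the remote VTEPs used for flooding
      bridge fdb show | grep 00:00:00:00:00:00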
  9. PBS as a VM

    For the third physical node, I have 2 x 1TB SSDs + 3 or 4 x 300GB mechanical disks...
  10. PBS as a VM

    Hello! As part of a little lab, I have two PVE servers that I would like to set up in a two-node cluster. I have a third node with a lot of resources that would probably be wasted on PBS (it's similar to the first two nodes). Anyhow, with PVE or PBS, I plan to run a QDevice on the third node...
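
    The QDevice part is a short procedure; a sketch assuming the third node's address is 192.0.2.30 and that it runs corosync-qnetd:

      # on both PVE cluster nodes
      apt install corosync-qdevice
      # on the third node (outside the cluster)
      apt install corosync-qnetd
      # then, once, from any cluster node
      pvecm qdevice setup 192.0.2.30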
  11. Amazon Linux 2

    My bad, it works out of the box; I just forgot to add the Cloud-Init drive.
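
    For completeness, the missing step is just attaching the drive; a sketch with an assumed VMID and storage name (ec2-user being Amazon Linux 2's default account):

      # attach a Cloud-Init drive and set minimal user data
      qm set 9000 --ide2 local-lvm:cloudinit
      qm set 9000 --ciuser ec2-user --ipconfig0 ip=dhcp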
  12. Amazon Linux 2

    Hello! Has anybody tested the qcow2 Amazon Linux 2 image? It seems to be prepared for cloud-init, but it doesn't work as-is with Proxmox 7. There's a seed.iso they provide that you're supposed to adjust by hand, but I would like to use the stock cloud-init support in PVE. Ref...
  13. Multi-clouds : Vms on Cluster Proxmoxve+Ceph+GPU...

    Well, all I see is Kubernetes... Could you please explain what the role of MAAS and Juju is in your setup?
  14. Multi-clouds : Vms on Cluster Proxmoxve+Ceph+GPU...

    Hello! It's not clear what use case or problem this is meant to solve. Is this trying to match the role/functionality of VMware's vCloud Director or Mist.IO?
  15. [SOLVED] Issues with SRIOV-based NIC-passthrough to firewall

    Hello! Any update on this? Does the VLAN filter comment apply if you do tagging/untagging in the guest? Currently I have everything on a virtio-based NIC, doing the tagging on the OPNsense VM. I would like to move everything to an aggregation of two VFs on Intel NICs. I'm heavily using CARP.
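
    On the VF side, the tagging split looks roughly like this (interface names are assumptions; the trust/spoofchk knobs are flagged here because CARP's virtual MACs tend to trip VF anti-spoofing):

      # vlan 0 leaves 802.1q tagging to the guest (trunk-like); a fixed tag moves it to the PF
      ip link set enp129s0f0 vf 0 vlan 0
      ip link set enp129s0f0 vf 1 vlan 100
      # CARP/VRRP uses extra MACs, which anti-spoofing on the VF may drop
      ip link set enp129s0f0 vf 0 trust on spoofchk off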
  16. Sharing CEPH storage

    Would any of those options allow using the very same set of disks for both clusters (not having to dedicate physical disks per cluster)?
  17. Sharing CEPH storage

    Hello! Would it be possible to have an HCI cluster running Ceph and VMs, and at the same time a second cluster consuming Ceph-based storage from the first one, without clashing VM IDs, for example?
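
    One hedged approach that keeps both clusters on the same OSDs (which would also cover the shared-disks question above): give the second cluster its own pool and attach it as external RBD storage. Pool, storage name, and monitor address are placeholders:

      # on the HCI cluster: a dedicated pool for the second cluster's images
      ceph osd pool create cluster2-pool 128
      rbd pool init cluster2-pool
      # on the second cluster: consume it as external RBD storage
      pvesm add rbd cluster2-rbd --pool cluster2-pool --monhost 192.0.2.11 --username admin
      # (the client keyring also needs to land in /etc/pve/priv/ceph/cluster2-rbd.keyring)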
  18. MTU limited to 3030?

    Hello! I want to report that after upgrading to PVE 7, Linux VMs with VMXNET3 now seem to work properly, but ESXi 7 is still limited to an MTU of 3030.
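
    A quick way to pin down that ceiling from either side (the target address is a placeholder):

      # probe the path MTU with "don't fragment" pings
      ping -M do -s 3002 <esxi-host>    # 3002 + 28 bytes of headers = 3030, should pass
      ping -M do -s 8972 <esxi-host>    # 9000-byte frames; fails if the 3030 limit holds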
  19. Moving to SR-IOV

    Rephrasing the question: should I use the PFs for my bonding configuration on the host (enp129s0f0 + enp130s0f0), or should I configure the bond with the first VFs (enp129s0f0v0 + enp130s0f0v0)? For each VM it's clear I should pass through the PCI device for the VF; it's not so clear how I can use the...
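
    A sketch of the PF-based variant (VF counts, the VMID, and the PCI address are illustrative assumptions):

      # carve VFs out of each port, keeping the PFs free for the host bond
      echo 4 > /sys/class/net/enp129s0f0/device/sriov_numvfs
      echo 4 > /sys/class/net/enp130s0f0/device/sriov_numvfs
      # bond the PFs in /etc/network/interfaces (bond-slaves enp129s0f0 enp130s0f0)
      # and pass whole VFs to guests by PCI address:
      qm set 101 -hostpci0 0000:81:10.0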