Search results

  1.

    [ UPDATE! ] vm vlans are not allowed on vnet!

    I don't know who is responsible for the SDN stack, but I would like to call on @fiona, @t.lamprecht, @aaron, or any other Proxmox staff to take a look at this and perhaps shed some light for us. Because, AFAIK, SDN is no longer a tech preview, is it? So I suppose this should be production ready, right? Thanks
  2.

    [ UPDATE! ] vm vlans are not allowed on vnet!

    Well, in my case, every single VLAN works nicely. But I need to do that at the host level, like this: auto eno1 iface eno1 inet manual auto eno1.221 iface eno1.221 inet manual auto eno1.223 iface eno1.223 inet manual auto eno1.225 iface eno1.225 inet manual auto eno1.226...
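    Laid out properly, the host-level configuration quoted in that preview corresponds to an /etc/network/interfaces fragment like this (a sketch reconstructed from the visible part of the snippet; the preview is cut off after eno1.226, so any further entries are unknown):

      auto eno1
      iface eno1 inet manual

      auto eno1.221
      iface eno1.221 inet manual

      auto eno1.223
      iface eno1.223 inet manual

      auto eno1.225
      iface eno1.225 inet manual

      auto eno1.226
      iface eno1.226 inet manual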
  3.

    [ UPDATE! ] vm vlans are not allowed on vnet!

    OK, I will answer my own thread. This error message appears because I already had a VLAN with the same name. So, for example, I had the VLAN defined in the interfaces file, like this: auto enp6s18.10 iface enp6s18.10 inet manual auto vmbr0v10 iface vmbr0v10 inet manual bridge-ports...
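    In other words, the pre-existing definition in /etc/network/interfaces that clashed with the SDN vnet looked roughly like this (a sketch; the bridge-ports value is cut off in the preview, so enp6s18.10 below is an assumption based on the usual vmbr0vX naming):

      auto enp6s18.10
      iface enp6s18.10 inet manual

      auto vmbr0v10
      iface vmbr0v10 inet manual
              bridge-ports enp6s18.10   # assumed; truncated in the preview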
  4.

    [ UPDATE! ] vm vlans are not allowed on vnet!

    Hi there. I created a bunch of VLAN zones, like: ID: vlanzone, Bridge: vmbr0, IPAM: pve (the other parameters I just left as they are). Then, inside this vlanzone, I created some vnets, like: Name: vnet221, Tag: 221. But when I tried to assign that vnet221 to a VM, I got this error when trying to...
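    For reference, a zone and vnet created that way should end up on disk roughly as follows, assuming the standard SDN config files under /etc/pve/sdn/ (a sketch, not the poster's actual files):

      # /etc/pve/sdn/zones.cfg
      vlan: vlanzone
              bridge vmbr0
              ipam pve

      # /etc/pve/sdn/vnets.cfg
      vnet: vnet221
              zone vlanzone
              tag 221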
  5.

    Downgrade kernel from 6.8 to 6.7

    Hi. You can in fact do that by running apt install proxmox-kernel-<TAB><TAB>, which lists the available kernel packages: proxmox-kernel-6.2 proxmox-kernel-6.5.11-5-pve proxmox-kernel-6.5.13-4-pve-signed-template proxmox-kernel-6.8.4-2-pve-signed proxmox-kernel-6.2.16-10-pve...
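    To actually stay on the older kernel after installing it, it can be pinned so the bootloader keeps selecting it (a sketch; the version strings below are examples, use whatever the tab completion shows on your host):

      # install an older kernel series
      apt install proxmox-kernel-6.5

      # make the bootloader default to it across reboots
      proxmox-boot-tool kernel pin 6.5.13-4-pve

      # reboot into the pinned kernel
      reboot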
  6.

    [SOLVED] How to destroy a CEPH OSD on a node already removed from cluster?

    Before removing the node, isn't it good practice to mark the OSD as out first, wait for Ceph to rebalance, and only then stop, remove, and purge the OSD?
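    As commands, that sequence would be roughly the following (a sketch; osd.3 is a placeholder ID, substitute the real one):

      ceph osd out osd.3         # stop placing data on it; Ceph starts rebalancing
      # wait until 'ceph -s' shows the cluster healthy and rebalancing finished
      systemctl stop ceph-osd@3  # stop the OSD daemon on its node
      pveceph osd destroy 3      # remove and purge the OSD via the PVE helper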
  7.

    [SOLVED] Remove or reset cluster configuration.

    This helped me just now. I only needed to add the rm -rf step and it did the trick perfectly. Thanks
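    For anyone landing here, the steps being referred to match the documented procedure for separating a node without reinstalling (a sketch; only run this on a node you really want out of the cluster):

      systemctl stop pve-cluster corosync   # stop the cluster services
      pmxcfs -l                             # start the cluster filesystem in local mode
      rm /etc/pve/corosync.conf             # drop the cluster configuration
      rm -rf /etc/corosync/*                # the rm -rf step mentioned above
      killall pmxcfs                        # stop the local-mode instance
      systemctl start pve-cluster           # come back up as a standalone node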
  8.

    8 server and so what?

    I just thought that separating the storage from the nodes where the VMs will run would be good practice. Just an idea!
  9.

    8 server and so what?

    All servers: 2x 480 GB NVMe, 4x 3.7 TB NVMe, 2x 1 Gb NICs, 4x 10 Gb NICs
  10.

    8 server and so what?

    Hi folks, I have 8 servers. What would you do? 1 - all of them running Ceph and VMs, or 2 - 5 nodes as a Proxmox Ceph cluster and 3 servers for VMs? I'll wait for your thoughts and suggestions.
  11.

    Issue with qm remote-migrate: Target Node Reports as Too Old Despite Newer Version

    It seems to expect a different version of cloud-init. Try disabling/removing the cloud-init drive and trying again.
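    For example, assuming the cloud-init drive is attached as ide2 (check the VM's config first; VMID 100 is a placeholder), removing it before retrying the migration would look like this (a sketch):

      qm config 100 | grep cloudinit   # locate the cloud-init drive
      qm set 100 --delete ide2         # detach it, then retry qm remote-migrate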
  12.

    After add a 2nd nvme, lvm-thin just "broke"... sort of...

    Hi folks. There is this customer who has a Supermicro server with a riser card holding a RAID card and 1 NVMe. The riser card has 3 slots: the first slot has the RAID card, the second one has an NVMe, and the third - and last - one was empty. On the RAID card he had installed Proxmox VE 7.x. (OK! I know it is time to...
  13.

    HA cluster with two node and qDevice + CEPH: don't work HA: why?

    You can have it by using GlusterFS. I have many customers with a 2-node setup on GlusterFS. Send me a PM for more info.
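    For context, a replicated GlusterFS volume plugs into PVE as a storage entry like this (a sketch; the server addresses and the volume name gv0 are placeholders):

      # /etc/pve/storage.cfg
      glusterfs: glusterstore
              server 10.0.0.1
              server2 10.0.0.2
              volume gv0
              content images,iso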
  14.

    virtio-win-0.1.262-1 Released

    Well, at least we got an alternative.
  15.

    virtio-win-0.1.262-1 Released

    After the installation, I got a rollback, but it seems that everything is fine.
  16.

    virtio-win-0.1.262-1 Released

    It seems to work fine when you do it at Windows installation time. I was able to install virtio-scsi, NetKVM, and Balloon.