Search results

  1. [SOLVED] Temporarily disable datastore! [ That was quick! ]

    Hi folks. Is there any way to temporarily disable a datastore and then bring it back online? I need to do it via the CLI, please. Cheers.
  2. Still got error when creating VLAN and trying to use tag! [ SDN disappointed ]

    OK! But the feature is there! Why tease us with it if we can't use it?
  3. Still got error when creating VLAN and trying to use tag! [ SDN disappointed ]

    Hi there. I created a VLAN in SDN called vlan1, not VLAN aware. Then I went ahead and created a zone called vnet1. Inside this vnet1 I have set up a DHCP server. But when I tried to tag a NIC in a VM, I got this error: vm vlans are not allowed on vnet vnet1 at...
  4. [SOLVED] How to or is it possible

    /usr/share/doc/pve-manager/examples/spice-example-sh
  5. sda turns into sdb or sdc after installing PVE over Debian 12.

    In fact, I am using UUIDs in /etc/fstab; the Debian installer does that normally. I don't get why the other identical server, with the same set of NVMe drives, doesn't have the same issue.
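    Pinning mounts to filesystem UUIDs, as described above, is what makes an fstab entry immune to sda/sdb reshuffles. A minimal sketch of such a line (the UUID and mount options below are placeholders, not taken from the thread; list your real UUIDs with blkid):

    ```
    # /etc/fstab -- mount by UUID so the entry survives device-name changes
    # (UUID below is a placeholder, not from the thread)
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /  ext4  errors=remount-ro  0  1
    ```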
  6. sda turns into sdb or sdc after installing PVE over Debian 12.

    Hi everybody, I don't know if someone can help out. I was able to install Debian 12 on a Dell R650xs using 3 NVMe drives. Strangely enough, the NVMe drives appear as sda, sdb and sdc. During the Debian installation everything goes fine, but after installing Proxmox over Debian 12 and rebooting, sda turns into sdb...
  7. How to translate traditional VLAN config to SDN?

    Hi there. Currently, I have the following VLAN configuration on 2 PVE servers: auto eno1 iface eno1 inet manual auto eno1.221 iface eno1.221 inet manual auto eno1.223 iface eno1.223 inet manual auto eno1.225 iface eno1.225 inet manual auto eno1.226 iface eno1.226 inet manual auto eno1.227...
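    For reference, the per-interface VLAN stanzas quoted above map onto one VLAN zone plus one vnet per tag in SDN. A rough sketch of the SDN equivalent, assuming a vmbr0 bridge on top of eno1 (zone and vnet names are illustrative, and the exact config keys may vary by PVE version; SDN config lives under /etc/pve/sdn/ and is normally edited via the GUI or API, not by hand):

    ```
    # /etc/pve/sdn/zones.cfg -- one VLAN zone replaces all the eno1.NNN stanzas
    vlan: lanzone
            bridge vmbr0
            ipam pve

    # /etc/pve/sdn/vnets.cfg -- one vnet per former VLAN sub-interface
    vnet: vnet221
            zone lanzone
            tag 221

    vnet: vnet223
            zone lanzone
            tag 223
    ```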
  8. [ UPDATE! ] vm vlans are not allowed on vnet!

    I don't know who is responsible for the SDN stack, but I would like to ask @fiona @t.lamprecht @aaron or any other Proxmox staff to take a look at this and perhaps shed some light for us. Because, AFAIK, SDN is not a tech preview anymore, is it? So I suppose this should be production ready, right? Thanks
  9. [ UPDATE! ] vm vlans are not allowed on vnet!

    Well, in my case every single VLAN works nicely, but I need to do it at the host level, like this: auto eno1 iface eno1 inet manual auto eno1.221 iface eno1.221 inet manual auto eno1.223 iface eno1.223 inet manual auto eno1.225 iface eno1.225 inet manual auto eno1.226 iface eno1.226...
  10. [ UPDATE! ] vm vlans are not allowed on vnet!

    OK! I will answer my own thread. This error message appears when I already had a VLAN with the same name. So, for example, I had the VLAN defined in the interfaces file, like this: auto enp6s18.10 iface enp6s18.10 inet manual auto vmbr0v10 iface vmbr0v10 inet manual bridge-ports...
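    In other words, an SDN vnet and a classic Linux VLAN bridge were both claiming the same tag. A hypothetical reconstruction of the kind of conflicting stanza described (the quote is truncated, so the bridge-ports value here is an assumption, not the poster's actual config):

    ```
    # Classic VLAN-10 bridge of the kind described above -- removing this
    # stanza and keeping only the SDN vnet for tag 10 is what, per the post,
    # cleared the "vm vlans are not allowed on vnet" error
    auto enp6s18.10
    iface enp6s18.10 inet manual

    auto vmbr0v10
    iface vmbr0v10 inet manual
            bridge-ports enp6s18.10   # assumed; the quoted config is truncated here
    ```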
  11. [ UPDATE! ] vm vlans are not allowed on vnet!

    Hi there. I have created a bunch of VLAN zones, like: ID: vlanzone Bridge: vmbr0 IPAM: pve; the other parameters I just left as they are. Then, inside this vlanzone, I created some vnets like: Name: vnet221 Tag: 221. But when I tried to assign that vnet221 to a VM, I got this error when trying to...
  12. Downgrade kernel from 6.8 to 6.7

    Hi. You can in fact do that by running apt install proxmox-kernel-<TAB><TAB>, which lists: proxmox-kernel-6.2 proxmox-kernel-6.5.11-5-pve proxmox-kernel-6.5.13-4-pve-signed-template proxmox-kernel-6.8.4-2-pve-signed proxmox-kernel-6.2.16-10-pve...
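    The tab-completion trick above lists the installable kernel packages. A sketch of the full downgrade sequence, assuming one of the listed 6.5 packages is the target (the package and version strings are examples patterned on the listing, not guaranteed to match your repository):

    ```shell
    # Install the older kernel package (example version string, verify with
    # tab completion against your own repository first)
    apt install proxmox-kernel-6.5.13-4-pve-signed

    # Pin it so the bootloader keeps selecting it across future updates
    proxmox-boot-tool kernel pin 6.5.13-4-pve

    # Reboot into the pinned kernel
    reboot
    ```

    These commands modify a live PVE host, so they are shown as a procedure rather than something to paste blindly; `proxmox-boot-tool kernel unpin` reverses the pin.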
  13. [SOLVED] How to destroy a CEPH OSD on a node already removed from cluster?

    Before removing the node, isn't it good practice to do an out, wait for Ceph to rebalance, and only then stop, remove and purge the OSD?
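    The drain-first sequence suggested above can be sketched as follows (OSD id 12 is a placeholder; run this from a node that still has Ceph access, and only proceed past the health check once rebalancing has finished):

    ```shell
    # Mark the OSD out so Ceph migrates its placement groups elsewhere
    ceph osd out 12

    # Watch until the cluster reports HEALTH_OK again (rebalance finished)
    ceph -s

    # Only then stop the daemon and destroy the OSD
    systemctl stop ceph-osd@12
    pveceph osd destroy 12 --cleanup
    ```

    These are cluster-mutating admin commands, so treat this as an ordered checklist rather than a script.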
  14. [SOLVED] Remove or reset cluster configuration.

    This helped me just now. I only needed to add rm -rf and it did the trick perfectly. Thanks
  15. 8 servers and so what?

    I just thought that separating the storage from the nodes where the VMs will run would be good practice. Just an idea!
  16. 8 servers and so what?

    All servers: 2x 480 GB NVMe, 4x 3.7 TB NVMe, 2x 1 Gb NICs, 4x 10 Gb NICs.
  17. 8 servers and so what?

    Hi folks, I have 8 servers. What would you do? 1 - all of them running both Ceph and VMs, or 2 - 5 nodes for a Proxmox Ceph cluster and 3 servers for VMs? I'll wait for your thoughts and suggestions.
  18. Issue with qm remote-migrate: Target Node Reports as Too Old Despite Newer Version

    It seems to expect a different version of cloud-init. Try disabling/removing the cloud-init drive and trying again.