Search results

  1.

    Proxmox Datacenter Manager 0.9 Beta released!

    I think the Datacenter should be more visible, since we need to manage VMs that are inside a node within a datacenter. See the image. The information regarding the node itself could be below, right where the Datacenter is. It's just a matter of positioning these panels to make the information easier to find and...
  2.

    Proxmox Datacenter Manager 0.9 Beta released!

    Hi... I installed it here and added 2 nodes that are part of one datacenter. I noticed that inside the Remote, the datacenter nodes are, like, hidden... Is that so?
  3.

    VXLAN with low bitrate transfer even with 10G SFP+.

    OK... Another thing I noticed is that when I tag the NIC in the VM, I get around 1 Gbit/s, and without the tag I get around 3 Gbit/s. But I need to use a tag on the VM NIC in order to isolate the traffic. Another question is whether the switch needs some special configuration regarding VXLAN or fragmentation...
  4.

    VXLAN with low bitrate transfer even with 10G SFP+.

    As I stated before, with an MTU of 8950 I got around 3 Gbit/s, but with 1450 the transfer rate drops to around 1 Gbit/s!
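    The 8950-vs-1450 numbers line up with VXLAN encapsulation overhead: the tunnel adds about 50 bytes per packet (inner Ethernet 14 + VXLAN header 8 + UDP 8 + outer IPv4 20), so the vnet MTU must sit 50 bytes below the underlay MTU. A minimal sketch of the arithmetic, assuming an IPv4 underlay with an untagged outer frame:

    ```shell
    # VXLAN overhead per packet (assumption: IPv4 underlay, untagged outer frame):
    #   inner Ethernet 14 + VXLAN header 8 + UDP 8 + outer IPv4 20 = 50 bytes
    phys_mtu=9000          # MTU of the physical 10G NIC
    vxlan_overhead=50
    vnet_mtu=$((phys_mtu - vxlan_overhead))
    echo "vnet MTU: $vnet_mtu"   # vnet MTU: 8950
    ```

    With a default 1500-byte underlay the same arithmetic gives 1450, which also explains part of the slowdown: the payload per packet shrinks while the per-packet processing cost stays the same.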
  5.

    VXLAN with low bitrate transfer even with 10G SFP+.

    Hi... Thanks for the reply. This is the result of iperf on the real NIC, on the hosts: ----------------------------------------------------------- Server listening on 5201 (test #1) ----------------------------------------------------------- Accepted connection from 172.18.0.10, port 33718 [ 5]...
  6.

    VXLAN with low bitrate transfer even with 10G SFP+.

    Hi... Hope everybody is OK! I have noticed something weird... Or perhaps it's just the way it is... I have 3 nodes, each with 3 NICs: 2x 1G and 1x 10G. So I went ahead and created a VXLAN and a zone like this: pve101:/etc/pve/sdn# cat vnets.cfg vnet: vxnet1 zone vxzone tag 100000...
  7.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    Hi again... I have reinstalled all Proxmox nodes, installed Ceph on each node, and created the mons and mgr on each node. I issued the command ceph-volume lvm activate --all on each node in order to bring up /var/lib/ceph/osd/<node>. After that I ran these commands: ceph-volume lvm activate...
  8.

    [TUTORIAL] Inside Proxmox VE 9 SAN Snapshot Support

    Oh, I see... I was under the assumption that this applies only to LVM in combination with a SAN, because that is a common scenario in the enterprise. But I am glad that this will work on other storages too. Thanks
  9.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    Hi, do I need to add a mon/mgr AFTER or BEFORE I issue the command ceph-volume lvm activate --all? Because as far as I can tell, the old OSDs are available only after ceph-volume lvm activate --all, right? So in order to get some information from those OSDs I need to trigger the command ceph-volume...
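    The ordering can be reasoned from what each command needs: ceph-volume lvm activate --all only reads LVM metadata on the local disks, so it works without a quorum, but the OSD daemons it starts cannot join anything until monitors exist. A hedged sketch of one possible sequence (assumption: the new cluster reuses the old cluster's fsid and keyrings; commands are illustrative, adapt them to your setup before running anything):

    ```shell
    # 1. Recreate monitor(s) and manager(s) first, so the OSDs have a cluster to join:
    pveceph mon create
    pveceph mgr create

    # 2. On each node, inspect and activate the surviving OSD volumes:
    ceph-volume lvm list            # shows OSD ids/fsids recorded in the LVM tags
    ceph-volume lvm activate --all  # recreates /var/lib/ceph/osd/ceph-<id> and starts the daemons

    # 3. Check whether the activated OSDs registered with the new monitors:
    ceph osd tree
    ```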
  10.

    [TUTORIAL] Inside Proxmox VE 9 SAN Snapshot Support

    Good job @spirit! I would like to suggest that this plugin be displayed only when we add LVM-based storage, back in Datacenter -> Storage, since I think it is meant to be used only with LVM + SAN, right? Cheers
  11.

    Severe system freeze with NFS on Proxmox 9 running kernel 6.14.8-2-pve when mounting NFS shares

    I tried it here with Proxmox 9.0.5 and kernel 6.14 and had no issue! Using TrueNAS-SCALE-25.04.2.1. The only thing I saw was that in the web GUI, I could only see the NFS share if I chose ver=3. But after adding it, I edited /etc/pve/storage.cfg and changed it to ver=4, and it works fine for both...
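    For reference, the edit described above is a one-line change in the storage definition; a sketch of what such an entry might look like (the storage name, server address, and paths are placeholder assumptions, and note that the mount option is spelled vers=, not ver=):

    ```
    nfs: truenas
            export /mnt/tank/share
            path /mnt/pve/truenas
            server 192.168.1.10
            content images,backup
            options vers=4
    ```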
  12.

    Severe system freeze with NFS on Proxmox 9 running kernel 6.14.8-2-pve when mounting NFS shares

    But what about the NFS server side? Is it a Linux box? Which one? FreeNAS/TrueNAS? QNAP? Synology? We need more info.
  13.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    Perhaps that's the way? https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
  14.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    No need to rush! It's a test environment... But if you guys could help me, I would be much obliged. Thanks
  15.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    Thank you for your reply @UdoB. But I think if I wipe the disk I will permanently lose the data, won't I? Is there a way to do it without losing the data? Perhaps there is a misunderstanding... I lost all 3 servers, but not the OSD disks. After I reinstalled PVE and Ceph on the 3 nodes, I have had...
  16.

    Add previously OSD to a new installation of a 3 nodes Proxmox CEPH

    Hi, I have a 3-node Proxmox cluster with Ceph, and after a crash I had to reinstall Proxmox from scratch, along with Ceph. The OSDs are intact. I already ran ceph-volume lvm activate --all, the OSDs appear with ceph-volume lvm list, and I got a folder with the name of the OSD under...
  17.

    Where pve-network-interface-pinning store it's configuration?

    Sorry! It was right there in my face! Shame on me... This will generate name pinning configuration for the interface 'enp1s0' - continue (y/N)? y Name for link 'enp1s0' (enx525400d99ca2) will change to 'eth0' Generating link files Successfully generated .link files in...
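    Answering the thread title: the pinning is stored as systemd .link files (the generated path is truncated in the output above). A .link file that pins a name by MAC address generally looks like this sketch (the MAC is decoded from the enx525400d99ca2 name in the output; the exact keys the tool writes may differ):

    ```
    [Match]
    MACAddress=52:54:00:d9:9c:a2

    [Link]
    Name=eth0
    ```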