Recent content by Gilberto Ferreira

  1.

    [SOLVED] qemu/kvm: 'gluster' is deprecated?

    Well... It's OK. Sad, but OK. Usually I create a Gluster volume and mount it on both nodes, then share it as directory-type storage. I never use the GlusterFS plugin, so from the qemu/kvm and Proxmox standpoint it is always a directory, right? Like I do here...
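For reference, exposing a locally mounted Gluster volume to Proxmox as plain directory storage, as described above, could look like this in /etc/pve/storage.cfg; the storage ID and mount path are assumptions, not taken from the post:

```
# Hypothetical /etc/pve/storage.cfg entry (ID "vms" and path /vms are
# assumed): a directory storage on top of the Gluster mount point,
# marked shared so every node sees the same backing volume.
dir: vms
        path /vms
        content images,rootdir
        shared 1
        is_mountpoint yes
```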
  2.

    [SOLVED] qemu/kvm: 'gluster' is deprecated?

    [SOLVED] I just changed from the GlusterFS plugin to the directory plugin and the warning message is gone. I usually create a Gluster volume and define it in the /etc/fstab file, like this: serverA: gluster1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0...
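The /etc/fstab line quoted in the snippet, restored to its normal one-entry-per-line layout:

```
# /etc/fstab on serverA: mount Gluster volume VMS at /vms on demand,
# falling back to gluster2 as volfile server if gluster1 is unreachable
gluster1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
```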
  3.

    [SOLVED] qemu/kvm: 'gluster' is deprecated?

    Hi. Is Gluster no longer properly supported? I set up two PVE boxes with Proxmox VE 8.4 and GlusterFS 10, which is shipped with Debian Bookworm. I have used Gluster for many years now, with no issues at all. Will you eventually drop GlusterFS for good? More warning messages...
  4.

    Where is the proxmox-backup-client static version to download?

    I wonder if this static binary is executable via WSL on a Windows box!
  5.

    Where is the proxmox-backup-client static version to download?

    Thank you. For Arch Linux based systems, there is already an AUR package: yay proxmox-backup-client 2 aur/proxmox-backup-client-bin 3.2.6_1-3 (+0 0.00) (Out of date: 2025-03-30) Client for Proxmox Backup Server (binary release from Debian) 1 aur/proxmox-backup-client 3.3.4-1 (+20 0.57)...
  6.

    Where is the proxmox-backup-client static version to download?

    Hi there. Simple as that: where can I download the static version of proxmox-backup-client? I found this (0), but I think it's a little old. Is there any newer version? Thanks. (0) - Index of /temp/proxmox-backup-client-static/v3.2.7/
  7.

    [TUTORIAL] ZFS RAIDz expand (unofficial)

    Hi there. After messing around with the latest version of OpenZFS, I decided to create this little tutorial that, I hope, can help someone else. First of all, I have Proxmox VE 8.3.5, up to date. Then it is necessary to install a couple of things: apt install proxmox-headers-6.11 alien autoconf automake...
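For context, the RAIDz expansion that this tutorial builds a newer OpenZFS for is performed with zpool attach against the raidz vdev itself. A minimal sketch, with pool name, vdev name, and disk purely hypothetical (this snippet does not show that step):

```shell
# Sketch only -- pool "tank", vdev "raidz1-0" and disk /dev/sdd are
# assumptions; read the real vdev name from "zpool status" first.
zpool status tank                      # note the raidz vdev name
zpool attach tank raidz1-0 /dev/sdd    # start the raidz expansion
zpool status tank                      # progress shows under the vdev
```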
  8.

    [SOLVED] Full mesh with 3 Proxmox and IPv6!

    Well... After banging my head against the wall a couple of times, I decided to use the RSTP Loop Setup with OVS. Works like a charm. With FRR, every time I needed to create a new empty bridge, for instance, the cluster went down. Now with RSTP I think it is more reliable. Thanks a lot for the help.
  9.

    Wrong time duration showing up in the web GUI for guest migration.

    Hi there. I have a 3-node Proxmox VE and Ceph setup with a full mesh network configuration. Everything is fine, but I noticed something weird today. When I migrate a simple VM with Debian 12 installed and nothing more, I get a wrong time duration displayed in the web GUI, as in the screenshot...
  10.

    [SOLVED] Full mesh with 3 Proxmox and IPv6!

    So, I tried to combine both configurations, and it works OK. But now I am trying to use one 10G NIC for ceph-net and the other for ceph-osd. Here is the frr.conf config: frr version 8.5.2 frr defaults traditional ipv6 forwarding hostname proxmox01 log syslog informational service...
  11.

    [SOLVED] Full mesh with 3 Proxmox and IPv6!

    Nah! It doesn't work! Any ideas are welcome.
  12.

    [SOLVED] Full mesh with 3 Proxmox and IPv6!

    Hi folks... It's me again. Just a little update. I figured out that doing it this way is a better approach: frr version 8.5.2 frr defaults traditional hostname proxmox01 log syslog informational service integrated-vtysh-config ! interface lo ipv6 ospf6 area 0.0.0.0 exit ! router ospf6 ospf6...
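For readability, the frr.conf fragment quoted in the snippet above, restored to its usual multi-line layout (cut off where the snippet is cut off):

```
frr version 8.5.2
frr defaults traditional
hostname proxmox01
log syslog informational
service integrated-vtysh-config
!
interface lo
 ipv6 ospf6 area 0.0.0.0
exit
!
router ospf6
 ospf6 ...
```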
  13.

    [SOLVED] Full mesh with 3 Proxmox and IPv6!

    Hi there. I am trying to implement a full mesh network with 3 servers. I am using this guide: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ Each of my 3 nodes has: 2x 1G NIC Port1 = eno8303 Port2 = eno8403 for the LAN and PVE access. This is my vmbr0 on each...