Recent content by Gilberto Ferreira

  1. ZFS Storage replication: is there any way to use a target other than the one included in the cluster?

    Hi... I am trying to use another target, and therefore another network, for the ZFS replication. But when I tried:

      pvesr create-local-job 100-0 rep_link1 --schedule "*/5"
      Target 'rep_link1' does not exist

    rep_link1 is in /etc/hosts, pointing to another IP address/link...
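
    As far as I can tell, pvesr only accepts a cluster node name as the target, not an arbitrary hostname from /etc/hosts. A minimal sketch of the usual approach, assuming the peer node is named pve2 and the dedicated link is 10.10.10.0/24 (both are placeholders):

      # replication jobs must target a cluster member by its node name:
      pvesr create-local-job 100-0 pve2 --schedule "*/5"
      # to route replication traffic over a dedicated link, point the
      # migration network at it in /etc/pve/datacenter.cfg; storage
      # replication reportedly follows this setting:
      #   migration: secure,network=10.10.10.0/24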
  2. Proxmox VE 9 existing ZFS vdev expansion?

    Any RAID is not a backup. It doesn't matter how it's done: hardware or software. RAID is RAID. Backup is another thing...
  3. Proxmox VE 9 existing ZFS vdev expansion?

    I know almost nothing about ZFS and related stuff, but shouldn't there be a resilvering in place at this point? Or I guess a resilver only happens when you lose a device in the same raidz, right?
  4. Proxmox VE 9 existing ZFS vdev expansion?

    I guess it could benefit, because you would have something like replica 2 instead of just 1. But I am not an expert at all. Just guessing...
  5. Proxmox VE 9 existing ZFS vdev expansion?

    I really don't know... Perhaps someone from the Proxmox staff can answer this... @fiona, for instance. Or @Chris
  6. Proxmox VE 9 existing ZFS vdev expansion?

    My guess is that you should run zpool upgrade POOL in order to activate the new features...
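
    A minimal sketch of what I mean, assuming a pool named tank with an existing raidz1 vdev raidz1-0 and a new disk /dev/sdX (all placeholder names), on OpenZFS 2.3+ where raidz expansion is available:

      zpool upgrade tank                    # enable newly supported feature flags
      zpool attach tank raidz1-0 /dev/sdX   # raidz expansion (needs the raidz_expansion feature)
      zpool status tank                     # watch the expansion progress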
  7. Issue with backup - TASK ERROR: connection error: not connected [Rolling back to PVE 8 and PBS 3]

    We already asked the people who manage the infrastructure there, and no issues were found. Strangely enough, this connection error occurs with only one VM. But that VM is right now running in another cluster with PVE 8 and PBS 3, and everything is OK. I'll try that. Thank you for...
  8. Issue with backup - TASK ERROR: connection error: not connected [Rolling back to PVE 8 and PBS 3]

    Hi there. We have 4 nodes, and one of them is PVE and PBS at the same time. Everything is up to date, running PVE 9 and PBS 4. When I tried to back up a VM which has a bunch of Docker containers inside, I got a lot of errors, like this: 2025-10-22T18:31:08-03:00: starting new backup reader datastore 'backup'...
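
    For what it's worth, a hedged sketch of the basic checks I would run on that node, assuming the datastore is named 'backup' and PBS runs locally (the repository string is a guess):

      pvesm status                                      # is the PBS storage online from PVE's view?
      proxmox-backup-client list --repository 'root@pam@localhost:backup'
      journalctl -u proxmox-backup-proxy --since "-1h"  # PBS-side log around the failed backup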
  9. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Updated 3 HPE servers... no issues... excellent performance. It's good to see Proxmox trying to keep up with new kernel releases... That means more compatibility and stability as well... Thanks
  10. After todays kernel update not possible to delete LXC or VM

    Exactly. That's the reason I asked for a complete screenshot: to see whether the VM/CT is running or not.
  11. After todays kernel update not possible to delete LXC or VM

    Hi there. Can you send a complete screenshot and not only a partial one? Thanks
  12. [GFS2] Filesystem does not mount automatically after node reboot

    Hi... Here is a systemd unit that I use for OCFS2. I hope you can adapt it to your needs:

      # /etc/systemd/system/data.mount
      [Unit]
      Description=Data mount
      After=drbd.service
      After=o2cb.service
      After=ocfs2.service

      [Mount]
      What=/dev/mapper/mylun
      Where=/data
      Type=ocfs2
      ...
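
    And a hedged adaptation of the same idea for GFS2, assuming the lock manager runs as dlm.service and reusing the placeholder device and mount point (note the unit file name must match the mount point: /data -> data.mount):

      # /etc/systemd/system/data.mount
      [Unit]
      Description=Data mount
      Requires=dlm.service
      After=dlm.service

      [Mount]
      What=/dev/mapper/mylun
      Where=/data
      Type=gfs2

      [Install]
      WantedBy=multi-user.target

      # then: systemctl daemon-reload && systemctl enable --now data.mount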
  13. VM has a delay showing its green status, even though the VM is running...

    Again... Another server... This time it took around 4 minutes to change from gray to green...