Search results

  1. One backup job to multiple nodes...

    I am aware of that... But with storage, I can choose on which nodes a storage appears, like the VMS storage you saw in storage.cfg. So I think the same option could exist when creating a backup task.
  2. One backup job to multiple nodes...

    No, I do not. It makes no sense to have a PBS storage on a server which will never run any kind of VM. The host is just a CEPH witness.
  3. One backup job to multiple nodes...

    Here we go:

        dir: local
            path /var/lib/vz
            content import,iso,images
            preallocation off
            prune-backups keep-all=1
            shared 1

        rbd: VMS
            content images,rootdir
            krbd 0
            nodes pve01-jdm,pve04-sta,pve03-sta,pve02-jdm
            pool VMS

        pbs: bkp...
  4. One backup job to multiple nodes...

    Of course it is telling me that something is wrong, because I do not have any VM running on pve01-dc3, hence I do not need to back up any VM from that node. So, I need some way to exclude that node, or a web UI option that allows me to choose only the nodes I want in the backup job. But I can't do that. Simple like...
  5. One backup job to multiple nodes...

    Hi, I have a cluster with 5 nodes: pve01-dc3, pve02-dc1, pve01-dc1, pve03-dc2, pve04-dc2, but I am using only 4 of them. So I created a job via the web GUI, but I noticed that I can't choose more than one node, meaning I have to create a separate job for each node. Is that right? I tried to do this...
  6. Two-ways mirroring CEPH cluster how to?

    Hi there... I am having trouble creating two-way Ceph mirroring:

        rbd mirror pool peer add VMS client.rbd-mirror-peer-a@node1-backup
        rbd: multiple RX peers are not currently supported
  7. Proxmox Backup Server 4.0 BETA released!

    Oh! I see... Just like any normal local storage... I was a bit confused... Thanks for the clarification.
  8. Why I cannot remove local storage?

    Well... I spoke too soon... I tried to do that in PVE 8.4 and got the same result.
  9. Why I cannot remove local storage?

    Yes... I know that... But it seems that I can do that if I am not using LVM. And I forgot to mention that I am using PVE 9 BETA1. Perhaps this feature is missing, somehow! With PVE 8 it works perfectly.
  10. Proxmox Backup Server 4.0 BETA released!

    Is there any plan to include S3 support for doing a Push Sync Job?
  11. Why I cannot remove local storage?

    Hi there. Fresh installation of Proxmox, and /etc/pve/storage is empty. Nevertheless, there is a local storage showing up in the Proxmox web GUI. I tried to remove it, but clicking the remove button does nothing. How can I remove the local storage that points to /var/lib/vz, for good? Thanks
  12. VM Template name is changed unexpectedly.

    Hi, I had a VM whose VMID was 114. This VM was deleted permanently. I created a new VM with the same VMID, 114, and converted it into a template. After a while, the template with VMID 114 had its name changed to the former VM's name. So I went ahead and changed the name...
  13. Proxmox VE with 6 nodes and CEPH with 2 DC

    That's OK. I will manage it myself. Thank you.
  14. error before or during data restore, some or all disks were not completely restored. VM 106 state is NOT cleaned up.

    @fiona I don't know exactly what happened, but now it works. Thank you for your help. Cheers
  15. SDN with evpn seems to work, but need help to understand routing...

    Hi there. I have followed this EVPN example: https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_zone_plugin_evpn to advertise an IP range with an ASN assigned to it. It works, but inside the VM, outgoing traffic to the internet shows the Proxmox public IP! So in that manner I cannot...
  16. error before or during data restore, some or all disks were not completely restored. VM 106 state is NOT cleaned up.

    Hi. Well... There is no timeout whatsoever. The error message appears right when I try to do a restore! It takes about 2 seconds! This is the log:

        error before or during data restore, some or all disks were not completely restored. VM 1155900 state is NOT cleaned up.
        TASK ERROR: unable to create...
  17. error before or during data restore, some or all disks were not completely restored. VM 106 state is NOT cleaned up.

    Hi... Sorry to bring up this topic again, but why does this issue occur? I have the same behavior. It's just a simple VM with 2 disks on NFS storage.
  18. can't migrate vm unless i'm logged into that exact node

    Personally, I do not recommend this setup. It's better to have different NICs and different IPs. If your LAN goes south, everything will fail together. But that's another story.
  19. can't migrate vm unless i'm logged into that exact node

    If you are using a different IP for migration, perhaps you need to SSH from and to all nodes in order to register the necessary SSH keys.
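    The advice above amounts to completing a full mesh of host-key exchanges: every node connects once to every other node so the keys land in known_hosts. A minimal sketch that enumerates that mesh (the node names are hypothetical examples, not from the thread):

        # Enumerate every ordered (source, destination) pair of cluster nodes,
        # so each node can SSH to every other one and accept its host key.
        from itertools import permutations

        nodes = ["pve01", "pve02", "pve03", "pve04"]
        pairs = list(permutations(nodes, 2))
        for src, dst in pairs:
            print(f"on {src}: ssh root@{dst} true   # accept the host key once")

    With n nodes this is n*(n-1) connections, so a 4-node cluster needs 12.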