Search results

  1. [SOLVED] Migration setting

    Ok, found the problem: Firewall issue
  2. [SOLVED] Migration setting

    Can't find what's wrong: 2021-06-18 17:52:01 203-0: start replication job 2021-06-18 17:52:01 203-0: guest => VM 203, running => 16668 2021-06-18 17:52:01 203-0: volumes => data:vm-203-disk-0,data:vm-203-state-before_cpu_change 2021-06-18 17:52:01 203-0: end replication job with error: command...
  3. pve-zsync vm include no disk on zfs

    Thanks for replying, agent: 1 bootdisk: virtio0 cores: 2 cpu: kvm64,flags=+aes description: nom machine%3A LOGIKUTCH keyboard: fr localtime: 1 memory: 4096 name: logikutch.eec31.local net0: virtio=3A:FE:2A:BE:1C:2B,bridge=vmbr31,firewall=1 numa: 0 onboot: 1 ostype: win7 parent: migration...
  4. pve-zsync vm include no disk on zfs

    Sorry if this is an old thread, but I am having a similar issue: root@backup1 ~ # pve-zsync sync --source 172.16.1.1:200 --dest rpool/data/Daily --name pve-zsync-daily --maxsnap 4 --method ssh Job --source 172.16.1.1:200 --name pve-zsync-daily got an ERROR!!! ERROR Message: Vm include no...
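
    A hedged aside, not from the thread: pve-zsync can only pick up guest disks that live on ZFS-backed storage, so a first check on the source node (VM ID and storage names taken from the command above) might look like the sketch below. If the disks turn out to sit on LVM or a directory storage, that would be consistent with the "Vm include no disk" error.

        # List the VM's disks and which storage backs them (VM 200 as in the command above)
        qm config 200 | grep -E '^(virtio|scsi|sata|ide)[0-9]+:'
        # Check the storage types; the disks need to be on a zfspool-type storage
        pvesm status
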
  5. Server migration

    ok, thanks for your answer.
  6. Server migration

    Hi everybody, I need to replace one of my servers in a Proxmox cluster. The VMs/LXCs have already been migrated; I just need the Proxmox configs. Are these files enough to migrate to the new server? /etc/pve/ /etc/lvm/ /etc/modprobe.d/ /etc/network/interfaces /etc/vzdump.conf /etc/sysctl.conf /etc/resolv.conf...
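
    As a rough sketch only (assuming the file list above is what needs to move; this is not an official migration procedure), the paths could be archived on the old node and carried over for selective restore on the new one. Note that /etc/pve is the cluster filesystem, so it is safer to restore individual guest configs from it than to overwrite it wholesale.

        # On the old node: archive the listed config paths ("new-node" is a placeholder hostname)
        tar czf /root/pve-config-backup.tar.gz \
            /etc/pve /etc/lvm /etc/modprobe.d /etc/network/interfaces \
            /etc/vzdump.conf /etc/sysctl.conf /etc/resolv.conf
        scp /root/pve-config-backup.tar.gz root@new-node:/root/
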
  7. pve-zsync stopped working

    OK, thanks for pointing me in the right direction. The issue wasn't related to pve-zsync. I had removed a user from /etc/pve/user.cfg but the user was still in the admins group; pve-zsync started to work again once that entry was removed. See the mistake below (user mars@pam): user:root@pam:1:0:::overlaps@outlook.com...
  8. pve-zsync stopped working

    Hi Fabian, thanks for replying. I regenerated the SSH keys on both servers and copied each public key to the other server. I am now able to SSH into each of them without a password, but I still get this issue: root@backup1 /home/sam # pve-zsync sync --source 172.16.1.1:100 --dest rpool/data/Daily --name...
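
    For reference, a minimal sketch of the key exchange described here, shown from the backup server's side with the peer IP taken from the command above; pve-zsync connects as root over SSH, so each node's public key has to end up in the other node's /root/.ssh/authorized_keys.

        # Generate a key if none exists, then push it to the peer and test the login
        ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
        ssh-copy-id root@172.16.1.1
        ssh root@172.16.1.1 true   # should complete without a password prompt
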
  9. pve-zsync stopped working

    Hi everybody, until yesterday I was using pve-zsync for daily snapshots and it was working perfectly. Today it stopped working and I can't figure out what's wrong. The job pulls VM snapshots from the remote server to the backup server. I am able to SSH without issue, and the firewall is disabled at the datacenter...
  10. delete snapshot after recovering

    Hi everybody, I back up my containers every day with the pve-zsync tool. I started a container from one of these snapshots to recover from an incident by importing the container's config into Proxmox. The container is running fine; now I am wondering if I can delete the 30 days of snapshots of this...
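
    A hedged sketch of the cleanup, with purely illustrative dataset and snapshot names (the thread does not give them): old pve-zsync snapshots can be removed with zfs destroy, and a whole range can be dropped with the old%new syntax. One caveat: if the recovered container runs from a clone of one of these snapshots, that snapshot cannot be destroyed until the clone is promoted with zfs promote.

        # Inspect what is there, then dry-run (-n) the destroy before doing it for real
        zfs list -t snapshot -o name,used rpool/pve-zsync/subvol-100-disk-0
        zfs destroy -nv rpool/pve-zsync/subvol-100-disk-0@rep_daily_2021-02-01_00:00:00%rep_daily_2021-03-01_00:00:00
        zfs destroy -v  rpool/pve-zsync/subvol-100-disk-0@rep_daily_2021-02-01_00:00:00%rep_daily_2021-03-01_00:00:00
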
  11. Proxmox is dirty

    Thanks Ramalama, but I'm not sure I got what you meant by that. I read some topics related to slab issues with Proxmox, but I guess it is above my competence. Just in case you can help a bit more, find attached the output of cat /proc/slabinfo
  12. Proxmox is dirty

    Thanks for your reply, here is the output: root@proxmox-3:~# arc_summary -p 1 ------------------------------------------------------------------------ ZFS Subsystem Report Mon Mar 29 19:49:02 2021 Linux 5.4.103-1-pve...
  13. Proxmox is dirty

    Hi everybody, since my servers burnt in the OVH datacenter I had to turn my backup servers into production servers. One is getting overwhelmed and is almost stuck, with RAM usage going straight to 93% after a reboot while only 3 VMs are running on it. I tried to use atop to understand what's going...
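
    If the memory is in fact being eaten by the ZFS ARC rather than the VMs (an assumption; the arc_summary output quoted above would confirm or refute it), a common mitigation on Proxmox is to cap the ARC size, for example:

        # Limit the ARC to 8 GiB (value in bytes; size it for the host)
        echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
        update-initramfs -u
        # Takes effect at the next reboot; to apply on the running system:
        echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
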
  14. restore container from snapshot

    Worked like a charm, many thanks
  15. restore container from snapshot

    Just to make sure before making mistakes, as I have no backup of the backup: on my backup server I have, for example, these snapshots of LXC 127: root@proxmox-3:~# zfs list | grep 127 rpool/pve-zsync/subvol-127-disk-0 1.30T 3.32T 1.26T /rpool/pve-zsync/subvol-127-disk-0 root@proxmox-3:~#...
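
    Purely as an illustration of one possible route (placeholder snapshot name, and a config-backup path modelled on the paths shown elsewhere in this thread; this is not the solution confirmed later), the replicated dataset could be cloned into the local ZFS storage and a copy of the container config pointed at it:

        # Pick the newest snapshot and clone it into the storage the config will reference
        zfs list -t snapshot rpool/pve-zsync/subvol-127-disk-0
        zfs clone rpool/pve-zsync/subvol-127-disk-0@rep_default_2021-03-09_00:00:00 rpool/data/subvol-127-disk-0
        # Restore the container config from the config backup and start it;
        # the rootfs line in 127.conf must reference the storage holding the clone
        cp /rpool/pve-config/proxmox-1/pve/lxc/127.conf /etc/pve/lxc/127.conf
        pct start 127
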
  16. restore container from snapshot

    Thanks for your help Fabian, I just learnt that 1 of the 3 nodes has burnt so I will go for the first solution.
  17. restore container from snapshot

    The cluster is down for sure, as the other servers are impacted by the OVH outage. I am not sure how to properly stop the cluster and gain access to all features on the remaining server. I don't want to destroy the cluster, as I don't know yet if the other servers are definitely gone. root@proxmox-3:/#...
  18. restore container from snapshot

    Thanks for your reply, I will start them on the backup server as an emergency measure. It looks like I cannot write to the config file location. Is it because the cluster is broken? Should I destroy it? root@proxmox-3:/# cp /rpool/pve-config/proxmox-1/pve/qemu-server/212.conf...
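
    A note on the read-only /etc/pve (assuming lost quorum is the cause, which would fit the situation described above): when a cluster loses quorum, the cluster filesystem goes read-only, and the surviving node can be told to expect only one vote instead of destroying the cluster.

        # Confirm the quorum state, then let the single remaining node be quorate again
        pvecm status
        pvecm expected 1
        # /etc/pve should become writable, so the copy can be retried, e.g.:
        cp /rpool/pve-config/proxmox-1/pve/qemu-server/212.conf /etc/pve/qemu-server/212.conf
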
  19. restore container from snapshot

    Hi everybody, I am using the pve-zsync tool to back up my VMs/containers. Since the OVH datacenter burnt during the night, all of my VMs/containers are gone and I only have access to my backup server. Snapshots have been made using: pve-zsync create --source 10.2.2.42:105 --name imap-daily --maxsnap 7...