Hi everybody,
I need to replace one of my servers in a Proxmox cluster.
The VMs/LXCs have already been migrated; I just need the Proxmox configs.
Are these files enough to migrate to the new server?
/etc/pve/
/etc/lvm/
/etc/modprobe.d/
/etc/network/interfaces
/etc/vzdump.conf
/etc/sysctl.conf
/etc/resolv.conf...
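For reference, I was planning to grab them with something like this (only the files listed so far; note that /etc/pve is the pmxcfs mount, so pve-cluster must be running while copying it):
tar czf pve-configs.tar.gz /etc/pve /etc/lvm /etc/modprobe.d /etc/network/interfaces /etc/vzdump.conf /etc/sysctl.conf /etc/resolv.conf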
Ok, thanks for pointing me in the right direction.
The issue wasn't related to pve-zsync itself.
I had removed a user in /etc/pve/user.cfg, but the user was still listed in the admins group.
pve-zsync started to work again once that entry was removed.
See below the mistake (user mars@pam):
user:root@pam:1:0:::overlaps@outlook.com...
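The admins group line still referenced the deleted user; from memory it looked roughly like this (member list illustrative):
group:admins:root@pam,mars@pam::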
Hi Fabian, thanks for replying.
I regenerated the SSH keys on both servers and copied the public keys across.
I can now SSH between them without a password, but I still get this issue:
root@backup1 /home/sam # pve-zsync sync --source 172.16.1.1:100 --dest rpool/data/Daily --name...
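For the record, this is roughly how I redid the key exchange on each node (assuming root's default key pair):
ssh-keygen
ssh-copy-id root@172.16.1.1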
Hi everybody,
I have been using pve-zsync for daily snapshots since yesterday, and it was working perfectly.
Today it stopped working and I can't figure out what's wrong.
The job pulls VM snapshots of the remote server from the backup server.
I am able to SSH without issue.
The firewall is disabled at the datacenter...
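To rule out the SSH path, I also checked that key auth works non-interactively, since a scheduled job can't type a password (IP from my setup):
ssh -o BatchMode=yes root@172.16.1.1 true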
Hi everybody,
I back up my containers every day with the pve-zsync tool.
I started a container from one of these snapshots to recover from an incident, by importing the container's config into Proxmox.
The container is running fine; now I am wondering if I can delete the 30 days of snapshots of this...
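What I have in mind is a dry run of the destroy first; something like this (dataset name from my pool, snapshot names are placeholders, % is the ZFS range syntax, and -n makes it a no-op):
zfs destroy -nv rpool/pve-zsync/subvol-127-disk-0@<oldest-snap>%<newest-snap>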
Thanks Ramalama, but I'm not sure I got what you meant by that.
I read some topics related to the slab issue with Proxmox, but I guess it is above my competence.
Just in case you can help a bit more, attached is the output of
cat /proc/slabinfo
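If it helps, I can also post the biggest slab caches sorted by size, via:
slabtop -o -s c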
Thanks for your reply,
here is the output:
root@proxmox-3:~# arc_summary -p 1
------------------------------------------------------------------------
ZFS Subsystem Report Mon Mar 29 19:49:02 2021
Linux 5.4.103-1-pve...
Hi everybody,
Since my servers burned in the OVH datacenter, I had to turn my backup servers into production servers. One is getting overwhelmed and is almost stuck, with RAM usage going straight to 93% after a reboot while only 3 VMs are running on it.
I tried to use atop to understand what's going...
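One thing I'm considering, assuming the ZFS ARC is what's eating the RAM (the 8 GiB cap is just an example to adapt):
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# takes effect after a reboot; for an immediate change:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max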
Just to make sure before making a mistake, as I have no backup of the backup:
On my backup server I have, for example, these snapshots of LXC 127:
root@proxmox-3:~# zfs list | grep 127
rpool/pve-zsync/subvol-127-disk-0 1.30T 3.32T 1.26T /rpool/pve-zsync/subvol-127-disk-0
root@proxmox-3:~#...
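To see the actual snapshots (zfs list alone only shows the dataset), I use:
zfs list -t snapshot -r rpool/pve-zsync/subvol-127-disk-0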
The cluster is down for sure, as the other servers are impacted by the OVH outage.
I am not sure how to properly stop the cluster and regain access to all features on the remaining server.
I don't want to destroy the cluster, as I don't know yet if the other servers are definitely gone.
root@proxmox-3:/#...
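From what I've read, lowering the expected vote count should make this last node quorate again, so /etc/pve becomes writable without destroying the cluster (not yet tried here):
pvecm expected 1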
Thanks for your reply, I will start them on the backup server for the emergency.
It looks like I cannot write to the config file location. Is it because the cluster is broken?
Should I destroy it?
root@proxmox-3:/# cp /rpool/pve-config/proxmox-1/pve/qemu-server/212.conf...
Hi everybody,
I am using the pve-zsync tool to back up my VMs/containers. Since the OVH datacenter burned during the night, all of my VMs/containers are gone and I only have access to my backup server.
Snapshots have been made using:
pve-zsync create --source 10.2.2.42:105 --name imap-daily --maxsnap 7...
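My rough recovery idea, for what it's worth (the snapshot name is a placeholder, and the dataset/target names assume my layout with a default local-zfs storage):
zfs send rpool/pve-zsync/vm-105-disk-0@<last-snapshot> | zfs recv rpool/data/vm-105-disk-0
and then put the matching VM config back under /etc/pve/qemu-server/.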
Old but interesting topic...
One thing I don't get with your solution: if I create 4 datasets (15min / daily / weekly / monthly), the first backup in each dataset will be a full copy of the VM, and then I'll have a rotation of snapshots. Am I correct?
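If I understand it right, that matches how plain ZFS replication behaves; roughly (hostnames and dataset names illustrative):
# first run: full stream
zfs send rpool/data/vm-100-disk-0@snap1 | ssh backup zfs recv rpool/daily/vm-100-disk-0
# later runs: incremental stream between two snapshots
zfs send -i snap1 rpool/data/vm-100-disk-0@snap2 | ssh backup zfs recv rpool/daily/vm-100-disk-0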
Thanks for your reply,
I can't answer anymore; I had to restore the container from a snapshot to allow customers to work.
This happened on 2 containers running Ubuntu 20.04 LTS (a Samba DC and a Nextcloud) after I updated and rebooted Proxmox.
The containers were able to start, but the Samba AD DC was broken...
Hi everybody,
I have a weird issue on one of my containers. I upgraded it to Ubuntu 20.04 LTS a few days ago, and everything was fine. Last night I rebooted the Proxmox host, and this morning the container won't run properly. Here is what I get from syslog:
May 20 08:39:02 srvdc kernel: [ 707.472536]...
Hi everybody,
I am painfully trying to set up an NFS server with Kerberos authentication, following this howto: NFSv4Howto
When I try to issue the command: modprobe rpcsec_gss_krb5
I get the following error:
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file...
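For anyone hitting the same thing: my understanding so far is that modules.dep is missing for the running kernel, and if this runs inside an LXC container the module has to be loaded on the Proxmox host instead, e.g.:
# on the host
depmod -a
modprobe rpcsec_gss_krb5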