Greetings
In the first days of this new year, my Proxmox cluster is in bad shape...
On one node, "pveproxy" is badly stuck:
root 15639 0.0 0.5 295812 89408 pts/26 D 2021 0:00 /usr/bin/perl -T /usr/bin/pvesr status
root 24233 0.0 0.5 283276 83712 ? Ds 2021 0:00...
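For the record, a minimal way to see what such D-state processes are blocked on (plain procps/proc tooling, nothing Proxmox-specific; the PID is taken from the ps output above):
# ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
# cat /proc/15639/stack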
Greetings
A few months ago, I set up pve-zsync:
SOURCE NAME STATE LAST SYNC TYPE CON
114 default syncing 2021-11-19_08:00:01 lxc ssh
For the past 3 days, I have been getting a lot of messages in the logs:
Job...
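In case it helps someone hitting the same thing: the configured jobs can be listed, and a job re-run by hand with verbose output to get the full error. The source/name below match the status line above; the destination is just a placeholder since it is not shown here:
# pve-zsync list
# pve-zsync sync --source 114 --name default --dest <target-host>:<target-pool> --verbose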
Greetings
If my understanding is correct, /etc/pve is a FUSE-based distributed filesystem. What is the software behind it? Is it possible to use this software for other needs? (In my case, replication of Nginx configuration files between several hosts.) I mean, not using /etc/pve but using the...
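For context, on a standard install it is easy to confirm that /etc/pve is a FUSE mount provided by the pve-cluster service (pmxcfs):
# mount | grep /etc/pve
# systemctl status pve-cluster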
Greetings
I have a cluster of 7 nodes running Proxmox v6. I have a new machine that I installed with Proxmox v7 and that I would like to add to the cluster, to start the v6->v7 migration of the cluster.
When I do:
# pvecm add OLD_NODE --link0 MY_IP --use_ssh yes
copy corosync auth key
stopping pve-cluster...
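When the join hangs at that point, a few generic things I would check (standard PVE/corosync tooling; the first two on the new node, the last one on one of the existing v6 nodes):
# systemctl status pve-cluster corosync
# journalctl -b -u corosync -u pve-cluster
# pvecm status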
Greetings
On my new Proxmox 6.4.13, something got "stuck": pct list, or any "service pve-<whatever> stop", hangs. In the logs I see lines like:
systemd[1]: pvestatd.service: Stopping timed out. Terminating.
scwv10 systemd[1]: pvedaemon.service: State 'stop-sigterm' timed out. Killing
scwv10...
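When every pve-* service hangs at once, my first suspicion would be pmxcfs itself, so a quick sanity check (the timeout just avoids yet another stuck shell; the dmesg grep looks for the kernel's "task blocked for more than 120 seconds" messages):
# timeout 5 ls /etc/pve
# dmesg -T | grep -i "blocked for more"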
Thanks for your answer.
I'd like this to be part of the "natural" replication process, so I'd prefer to avoid using the CLI.
What do you suggest? Renaming the volumes or pools, or maybe something more clever?
I have a big backup server which I cannot use because of that :(
TIA
Regards
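PS: to make the question concrete, by "renaming" I mean something along these lines (sketch only, dataset/pool names are made up; the guests would have to be stopped and their configs / storage.cfg adjusted accordingly):
# zfs rename tank/subvol-old tank/subvol-new
# zpool export tank && zpool import tank newtank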
I have the feeling it's related to the bug I referred to here: https://forum.proxmox.com/threads/pct-listsnapshot-strange-output.86675/
because at some point I see this:
Deep recursion on anonymous subroutine at /usr/share/perl5/PVE/GuestHelpers.pm line 165.
Greetings
I have a cluster where the nodes are on ZFS. I noticed that the target of a replication job must have the same ZFS configuration as the source.
I also noticed that I can "alias" a ZFS volume in /etc/pve/storage.cfg, i.e. specify which actual ZFS volume a "Proxmox ZFS name" corresponds to...
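For reference, this is the kind of storage.cfg entry I mean (a sketch, the storage name and pool path are just examples):
zfspool: ct_A
        pool tank/ct_A
        content rootdir,images
        sparse 1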
Well I was too optimistic... the process stopped after 7 hours and 73.6G:
send/receive failed, cleaning up snapshot(s)..
command 'set -o pipefail && pvesm export ct_A:subvol-134-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_134-1_1622564262__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o...
After a lot of investigation, I think I found the problem: the network interface had an MTU of 1500; after I changed the MTU to 1437 on all nodes, it seems to work a lot better.
Maybe this could help someone someday :)
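For completeness, the change itself is just an mtu line in /etc/network/interfaces on each node, e.g. (addresses and interface names below are examples, not my real ones), followed by ifreload -a (with ifupdown2) or a reboot:
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1437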
Thanks for your answer. In fact I did as you said: I created the job on Friday night; unfortunately, one week later, it's still "looping". My guess is that if it takes more than 24h, the destination gets erased and re-transferred, so limiting the bandwidth will probably make things worse.
I...
Greetings
I have Proxmox 6.3 zfs-based cluster.
Last week I added a new node and set up a few replication jobs from the old nodes to this new node. It works OK for the small ones (a few gigs), but for a moderately sized one it seems to loop: yesterday the destination zvol was 20G, today it's only 16G. The...
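As far as I understand the replication mechanism, the destination should keep the last common __replicate_* snapshot between runs, and if that base is lost the next run starts from scratch; one way to watch this (dataset name is a placeholder):
# pvesr status
# zfs list -r -t snapshot -o name,creation <pool>/<dataset> | grep __replicate_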
Thanks for your answer :) I indeed have a huge number of snapshots, since my script broke a while ago. This weekend I ended up with this:
pct listsnapshot $ctID 2>/dev/null | grep "daily_$year" | tr -cs '[[:digit:]]\n' ' ' | awk '{print $1}' > zfs_list_${ctID}_${year}.lst
(visually check the output...
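For the actual removal, the list can then be fed into pct delsnapshot, roughly like this (sketch; it assumes all snapshots are named daily_YYYYMMDD as in the listing, and I would do an echo-only dry run first):
# while read -r d; do pct delsnapshot $ctID "daily_$d"; done < zfs_list_${ctID}_${year}.lst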
Greetings
I have a cleanup script based on "pct listsnapshot" which stopped working a while ago.
While running manually, I found this output:
`-> daily_20181123 2018-11-23 04:30:07 no-description
`-> daily_20181124 2018-11-24 04:30:07 no-description
`->...
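In the meantime, a less fragile way to get the snapshot list seems to be the API through pvesh, which can return JSON instead of this tree view (node and vmid are placeholders):
# pvesh get /nodes/<node>/lxc/<vmid>/snapshot --output-format json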