Search results

  1. pveproxy stuck

    BTW I tried various commands, such as systemctl restart pveproxy pvedaemon and pvecm updatecerts; they all hang.
  2. pveproxy stuck

    Greetings In the first days of this new year, my Proxmox cluster is in bad shape... In one node, "pveproxy" is badly stuck: root 15639 0.0 0.5 295812 89408 pts/26 D 2021 0:00 /usr/bin/perl -T /usr/bin/pvesr status root 24233 0.0 0.5 283276 83712 ? Ds 2021 0:00...
  3. pve-zsync: job is already scheduled to sync

    Greetings A few months ago, I set up pve-zsync: SOURCE NAME STATE LAST SYNC TYPE CON 114 default syncing 2021-11-19_08:00:01 lxc ssh For the past 3 days, I have been getting a lot of messages in the logs: Job...
  4. /etc/pve distributed filesystem

    Thanks! Do you think I could use it for my use case? Does it rely on anything other than Corosync?
  5. /etc/pve distributed filesystem

    Greetings If my understanding is correct, /etc/pve is a FUSE-based distributed filesystem. What is the software behind it? Is it possible to use this software for other needs? (In my case, replication of Nginx configuration files between several hosts.) I mean, not using /etc/pve but using the...
  6. Adding new node v7 to v6 cluster

    Greetings I have a cluster of 7 Proxmox v6 nodes. I have a new machine that I installed with Proxmox v7 and would like to add it to the cluster to start the v6->v7 migration. When I do: # pvecm add OLD_NODE --link0 MY_IP --use_ssh yes copy corosync auth key stopping pve-cluster...
  7. "pct list" time out

    Greetings On my new Proxmox 6.4.13, something got "stuck": pct list, or any service pve-<whatever> stop, hangs. In the log I see lines like: systemd[1]: pvestatd.service: Stopping timed out. Terminating. scwv10 systemd[1]: pvedaemon.service: State 'stop-sigterm' timed out. Killing scwv10...
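When pve* services hang like this, processes stuck in uninterruptible sleep (state "D", as seen in the pveproxy thread above) are a common cause. A minimal triage sketch, using only generic Linux tools (not a Proxmox-specific command):

```shell
# List processes in uninterruptible sleep ("D" state). These usually point
# at hung I/O: a dead disk, an unreachable NFS server, or a stuck FUSE
# mount such as /etc/pve. The awk filter skips the ps header line.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ {print $1, $2, $3}'
```

D-state processes cannot be killed with SIGKILL, which is why the systemd stop timeouts above escalate without effect; the underlying I/O has to be resolved first.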
  8. ZFS volume naming (for replication)

    Thanks for your answer. I'd like this to be part of the "natural" replication process, so I'd prefer to avoid using the CLI. What do you suggest? Renaming the volumes or pools? Or maybe something more clever? I have a big backup server which I cannot use because of that :( TIA Regards
  9. Manually initiating zfs replication?

    I have the feeling it's related to the bug I referred to here: https://forum.proxmox.com/threads/pct-listsnapshot-strange-output.86675/ because at some point I see this: Deep recursion on anonymous subroutine at /usr/share/perl5/PVE/GuestHelpers.pm line 165.
  10. ZFS volume naming (for replication)

    Greetings I have a cluster where the nodes are on ZFS. I noticed that the target of a replication job must have the same ZFS configuration as the source. I also noticed that I can "alias" a zfs volume in /etc/pve/storage.cfg, i.e. specify which actual zfs volume a "proxmox zfs name" corresponds...
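The "aliasing" mentioned here refers to the storage definition in /etc/pve/storage.cfg, where the storage ID Proxmox uses and the actual ZFS dataset are separate fields. A hypothetical fragment (the ID and pool name are made up for illustration):

```
zfspool: ct_A
        pool tank/containers
        content rootdir,images
        sparse 1
```

Here "ct_A" is the Proxmox-side storage name referenced by VM configs and replication jobs, while the pool line names the actual ZFS dataset backing it.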
  11. Manually initiating zfs replication?

    At the exact time of the failure, I see this in the syslog: pvesr[26445]: OK
  12. Manually initiating zfs replication?

    Well I was too optimistic... the process stopped after 7 hours and 73.6G: send/receive failed, cleaning up snapshot(s).. command 'set -o pipefail && pvesm export ct_A:subvol-134-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_134-1_1622564262__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o...
  13. Manually initiating zfs replication?

    After a lot of investigation, I think I found the problem: the network interface had an MTU of 1500; after I changed the MTU to 1437 on all nodes, it seems to work a lot better. Maybe this could help someone someday :)
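If a path MTU mismatch like this is suspected, it can be probed directly: the largest unfragmented ICMP payload is the path MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header). A sketch, where TARGET_NODE is a placeholder for another cluster node:

```shell
MTU=1437                  # the working MTU found above
PAYLOAD=$((MTU - 28))     # subtract 20-byte IP header + 8-byte ICMP header
echo "$PAYLOAD"           # -> 1409

# Probe with the don't-fragment bit set (Linux ping); replace TARGET_NODE:
# ping -M do -s "$PAYLOAD" -c 3 TARGET_NODE
# If this payload size fails but smaller sizes succeed, something on the
# path is dropping full-size frames.
```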
  14. Manually initiating zfs replication?

    Thanks for your answer. In fact I did as you said: I created the job on Friday night; unfortunately, one week later, it's still "looping". My guess is that if it takes more than 24h, the destination will be erased and re-transferred. So limiting bandwidth will probably make things worse. I...
  15. Local node times out

    I just restarted the daemon, it was stuck somehow, everything else was working ok.
  16. Manually initiating zfs replication?

    Greetings I have a Proxmox 6.3 zfs-based cluster. Last week I added a new node and set up a few replications from the old nodes to this new node. It works ok for the small guests (a few gigs), but for a moderately sized one it seems to loop: yesterday, the destination zvol was 20G, today it's only 16G. The...
  17. "pct listsnapshot" strange output

    Thanks for your answer :) I indeed have a huge amount of snapshots, since my script broke a while ago. This weekend I ended up with this: pct listsnapshot $ctID | grep "daily_$year" 2>/dev/null | tr -cs '[:digit:]\n' ' ' | awk '{print $1}' > zfs_list_${ctID}_${year}.lst (visually check the output...
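For reference, that extraction pipeline can be sketched against sample input. One pitfall: in the output filename, ${ctID} needs braces, since the shell would otherwise parse $ctID_ as a single variable name. The container ID, year, and sample lines below are hypothetical:

```shell
ctID=134; year=2018   # hypothetical container ID and year

# Two sample lines standing in for real "pct listsnapshot" output:
sample='`-> daily_20181123 2018-11-23 04:30:07 no-description
`-> daily_20181124 2018-11-24 04:30:07 no-description'

# Keep the year's daily snapshots, replace every non-digit run with a
# space (tr -cs), then print the first field: the snapshot date.
printf '%s\n' "$sample" | grep "daily_$year" \
  | tr -cs '[:digit:]\n' ' ' \
  | awk '{print $1}' > "zfs_list_${ctID}_${year}.lst"

cat "zfs_list_${ctID}_${year}.lst"   # -> 20181123 and 20181124
```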
  18. Migrating VM via private IP instead of public IP

    Thanks for the very concise answer :D
  19. "pct listsnapshot" strange output

    Greetings I have a cleanup script based on "pct listsnapshot" which stopped working a while ago. Running it manually, I found this output: `-> daily_20181123 2018-11-23 04:30:07 no-description `-> daily_20181124 2018-11-24 04:30:07 no-description `->...
