Hi guys,
I keep running into the same issue on different sites: my Debian LXCs (Bookworm, for example) seem to hit a long delay at this point of the update process:
Can someone tell me what I have to change so that it doesn't sit there for 15-30 seconds before continuing with the update?
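For reference, this is roughly what I'd test next if it turns out to be a DNS/IPv6 timeout (just a guess on my side, not confirmed from this setup):

```shell
# Guess: apt may be timing out on IPv6 before falling back to IPv4.
# Forcing IPv4 inside the container would confirm or rule that out:
echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4
time apt update   # compare the wall-clock time with and without the file
```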
Hey guys,
I have been running a backup job for my whole environment for a couple of months now, successfully of course :)
Now I started a new node in my cluster and moved my Nextcloud LXC over to it.
Nextcloud is working, but the backup has been failing day after day:
INFO: Starting Backup of VM 47005 (lxc)...
Did you install corosync on all of your nodes before starting?
apt install corosync-qdevice
See here: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_qdevice_net_setup
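For completeness, the rough sequence from that chapter looks like this (the QDevice host's IP is a placeholder here):

```shell
# On the external QDevice host (a machine outside the cluster):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From one cluster node, register the QDevice (placeholder IP):
pvecm qdevice setup 192.0.2.10

# Check that the QDevice shows up with a vote:
pvecm status
```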
Hey guys, sorry for not responding here.
I was able to get it up and running in the past, but now it isn't working anymore (maybe PiKVM changed something on the OS that I can't figure out).
So what I did to get it up and running was basically what @oguz told me =>...
Here we go... something with the certificate:
root@pve01:~# proxmox-backup-client version --repository 192.10.10.251:backup
Password for "root@pam": ########################
certificate validation failed - context depth != 0
could not connect to server - error trying to connect: error:0A000086:SSL...
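If it is the pinned certificate that no longer matches (just my guess at this point), comparing and updating the fingerprint would look roughly like this:

```shell
# On the PBS host: print the current certificate details, including
# the fingerprint:
proxmox-backup-manager cert info

# On the PVE node: re-pin the fingerprint for the storage entry
# ("backup" is the storage name from the repository string; replace
# the placeholder with the value printed above):
pvesm set backup --fingerprint <fingerprint-from-pbs>
```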
@fiona
Ok, this is the output (the same as pvesm status, in my opinion):
Jan 29 17:34:37 pve01 pvestatd[1065]: backup: error fetching datastores - 500 Can't connect to 192.10.10.251:8007
I can share, but it will be a couple of hours until I'm back home ;) I will edit this post with the output.
In pvesm status there are entries for the PBS share with the "default" 500 error saying it can't connect to it.
Hi fiona,
in pvesm status, all external storages are marked as "inactive".
I'm able to ping the PBS, I can run telnet IPADDRESS 8007 against the PBS, and I can ssh to the PBS from the node that is not working.
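Since ping and telnet only prove basic reachability, I could also check the TLS handshake itself, which is what the PVE client actually performs:

```shell
# telnet only shows the TCP port is open; this also runs the TLS
# handshake against the PBS API port and prints the cert details:
openssl s_client -connect 192.10.10.251:8007 </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -fingerprint -sha256
```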
Hey guys,
I have a small problem in a friend's environment. I created a completely new Proxmox cluster at his site with 2 nodes. One of them is his old server that was hosting everything, yes, a virtualized Proxmox Backup Server as well (I know that it is not recommended, but there is no separate hardware for...
correct
That's why migration via right-click is that fast, ok.
correct
root@pve03:~# ha-manager status
quorum OK
master pve03 (active, Sat Jan 20 08:10:08 2024)
lrm pve01 (active, Sat Jan 20 08:10:05 2024)
lrm pve02 (active, Sat Jan 20 08:10:06 2024)
lrm pve03 (active, Sat Jan 20 08:09:58...
thank you for clearing this up :)
For your last question: I have a 3-node cluster that looks like this:
It is a full-mesh setup, so all 3 nodes are connected to each other.
Hey guys,
after I was able, with the help of this forum and some googling, to get rid of all my issues with my Proxmox HA cluster over Thunderbolt 4, and an NTP issue that held me back from enabling Ceph properly... I'm now working with my new cluster.
After my first LXC/VM was set up, I tried to...
So I guess you want to tell me I should use another MTU? It is also the value from the guide.
The time differed from my local time by over 8 hours, but I didn't check how big the difference between the 3 nodes was.
Your command succeeded as well; this was/is the output:
root@pve01:~# ping -M do -s 3972 10.0.0.82
PING 10.0.0.82 (10.0.0.82) 3972(4000) bytes of data.
3980 bytes from 10.0.0.82: icmp_seq=1 ttl=64 time=0.527 ms
3980 bytes from 10.0.0.82: icmp_seq=2 ttl=64 time=0.483 ms
3980 bytes from...
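As a side note on the sizes in that output: the -s value is the ICMP payload, and the byte counts line up like this:

```shell
# -s 3972 sets the ICMP payload size; the on-wire IPv4 packet adds
# the 8-byte ICMP header and the 20-byte IPv4 header:
payload=3972
echo $((payload + 8))        # 3980 -> the size each reply line reports
echo $((payload + 8 + 20))   # 4000 -> the total shown as "3972(4000)"
```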
Just edited my post above; right now timesync is not working :) Will report back.
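To check the timesync state I'll look at something like this on each node (assuming chrony, the default on recent PVE):

```shell
timedatectl status   # "System clock synchronized: yes/no"
chronyc tracking     # current offset from the selected time source
chronyc sources -v   # which NTP sources are reachable and selected
```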
I ran tests like ping/ssh from each node to the others, working on all 3 nodes, but that was also the case with @scyto's documentation.
Ok, I adjusted everything, same problem; it is now set up like the documentation in your Proxmox link. Here is the new frr file for comparison:
root@pve03:~# cat /etc/frr/frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration...
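For reference, the shape of the OpenFabric part of /etc/frr/frr.conf from the Proxmox full-mesh guide looks roughly like this (the interface names and the NET id are examples and differ per node):

```
interface lo
 ip router openfabric 1
 openfabric passive
!
interface en05
 ip router openfabric 1
!
interface en06
 ip router openfabric 1
!
router openfabric 1
 net 49.0000.0000.0001.00
```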