I can't remember but probably no motivation other than keep it simple.
Can I change it with a simple edit of corosync.conf with no other side effects? It seems like it from reading the man page.
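For reference, a minimal sketch of the settings involved, using key names from the corosync.conf(5) man page (on PVE the file lives at /etc/pve/corosync.conf; the values below are illustrative, not copied from any real cluster):

```
# Sketch only -- "Secure auth: off" corresponds to crypto_hash: none.
# Enabling secure auth would look roughly like this in the totem section:
totem {
  version: 2
  transport: knet
  crypto_cipher: aes256
  crypto_hash: sha256
  # remember to increment config_version whenever you edit this file,
  # so the change propagates to the other nodes
}
```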
The pve7to8 script is complaining because I have secure auth "off". Is it really necessary?:
Cluster information
-------------------
Name: ----
Config Version: 8
Transport: knet
Secure auth: off
Quorum information
------------------
Date: Mon Apr 22 12:00:22...
Happened again despite setting up a second, redundant cluster network. Also I replaced the single ethernet connection between switch and backup server (proxmox3) with a bonded connection.
Is it possible to tell anything from the log of the other 2 nodes? Although I see complaints about...
Hi. Thanks for your reply. The info is below. I have a feeling my config is ok but let me know if not. Yes I will observe closely during backup. Looking at my hosts file I think it should not be a problem. I don't have the cluster network addresses named at all. They are only referred...
Yes. I do have a separate cluster network on its own nic. I put some diagnostic output below - it seems to look good? My non-cluster traffic is on a different interface, which is a bond of 2 separate nics. Is it possible that my system is trying to back up across my cluster network? Does...
Since:
Until:
Happened during the night. Hopefully this log is useful - it is a bit opaque to me... The server rebooted and seems normal again today... I think. Any insight is much appreciated, especially if hardware is failing. I like to get right on top of that :)
Feb 05 23:01:43 proxmox1...
Ah ok. I suspected an issue with the cluster network but wasn't too sure. I think there may be a hardware issue in there. Thanks for pointing me in that direction.
(I have 3 nodes with dedicated cluster network )
Server rebooted during the night. I am wondering if anyone can explain this log?
Jun 24 00:00:02 proxmox1 pvescheduler[1417502]: <root@pam> starting task UPID:proxmox1:0015A121:0D62F8AD:64969472:vzdump:103:root@pam:
Jun 24 00:00:03 proxmox1 systemd[1]: Starting Rotate log files...
Jun 24 00:00:03...
Hi, thanks for your interest. It happened on two different versions of Ubuntu:
ubuntu 18.04 with qemu-guest-agent Installed: 1:2.11+dfsg-1ubuntu7.42
ubuntu 22.04 with qemu-guest-agent Installed: 1:6.2+dfsg-2ubuntu6.9
Hi, this happened on multiple Ubuntu VMs this morning. I had to kill qemu-ga on them to quiet it down. I don't see the "idle=poll" option in /etc/default/grub... would it be located elsewhere, or is this solution outdated?
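For anyone else checking: a hypothetical sketch of where "idle=poll" would normally live if it were set (the value shown is an example, not a recommendation):

```shell
# Sketch of /etc/default/grub -- if "idle=poll" were configured, it would
# usually sit in the kernel command line variable here:
GRUB_CMDLINE_LINUX_DEFAULT="quiet idle=poll"
# After editing, apply with update-grub and reboot; the live setting on a
# running system can be confirmed by reading /proc/cmdline.
```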
I mean, how were you accomplishing replication? There are different ways (technologies).
Two servers is enough to provide redundancy and failover but you need a third to provide "quorum".
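To illustrate with a sketch of the votequorum settings involved (illustrative only; on PVE you would not normally hand-edit this, since pvecm manages corosync.conf):

```
# With 3 nodes, expected_votes is 3 and any 2 surviving nodes keep quorum.
# A plain 2-node cluster loses quorum the moment either node drops, unless
# the special-case two_node: 1 flag is set, which has its own caveats --
# hence the advice to add a third node (or a QDevice) for the tiebreaker vote.
quorum {
  provider: corosync_votequorum
  expected_votes: 3
}
```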
You must be a small business person like me. Maybe you can tell by the silence that they are not interested in you.
That being said, your use case seems a little unusual.
I don't think you need to worry about unwanted failover if you have HA disabled. Without failover there is no need...
ok but you used to -> https://pve.proxmox.com/mediawiki/index.php?title=DRBD&oldid=9622
So, how about recreating the guide using the current favorite, Ceph? Or can I get a link?
Maybe I should simplify my question? Proxmox ships a custom kernel, right? I am just asking what version of drbd is in the latest pve kernel. Is it still drbd 8, I hope?
Hello. I want to know if pve 7 contains drbd 8 or some other version? (or none)
Last year I upgraded my pve 5 to pve 6 in place and my drbd setup broke. I tried hard to fix it but had to give up.
So now I am getting back to this and am thinking I will reinstall pve from an iso. Might...