This is just a self-signed certificate; we consider HTTPS with a self-signed certificate better than no HTTPS at all (not ideal, but probably better than nothing). If your certificate expires, it's actually not much different from the self-signed cert: you will have to trust it explicitly to keep using the...
You will have to renew that certificate manually. How you renew a certificate depends on your certificate vendor. Once you have a new or renewed certificate from your vendor, you can upload it to PVE just as you uploaded your first certificate.
Automatic renewal only works with ACME, as ACME...
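As a side note, you can sanity-check the validity dates of any certificate with openssl before uploading it. The file names and the CN below are made up for the demo; point the second command at the certificate your vendor issued:

```shell
# Demo: create a self-signed certificate, then read its expiry date.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -subj "/CN=pve.example.com" 2>/dev/null

# Print the notBefore/notAfter (validity) fields of the certificate
openssl x509 -in /tmp/demo-cert.pem -noout -dates
```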
Kind of. Since the routes are set when the interfaces come up anyway, Corosync doesn't need to know how to communicate with a specific node directly, as long as the traffic can be routed to it. Hope that makes sense :)
Yes, when you set up the cluster under Datacenter > Cluster > Create Cluster, add an additional link and select the IP address 10.15.15.50 from the drop-down. When joining the other nodes, the GUI will ask you for the new node's IP addresses on all links; there you then select 10.15.15.52 or...
What do you mean by "both" links? Corosync doesn't really care how exactly you set up your Full Mesh network. When creating the cluster, you can simply specify additional links and select the node's IP address on each link from the drop-down.
If you already created the cluster, you have to...
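For reference, here is a sketch of what the nodelist in /etc/corosync/corosync.conf ends up looking like with a second link. The 10.15.15.x addresses are the ones from this thread; the node names and the 192.168.1.x addresses on the first link are made-up examples:

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 192.168.1.50
    ring1_addr: 10.15.15.50
  }
  node {
    name: pve2
    nodeid: 2
    ring0_addr: 192.168.1.52
    ring1_addr: 10.15.15.52
  }
}
```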
No, it's the other way around, actually. The Mail Proxy's whitelist acts at the SMTP level, before the rule system's mail filters ever get triggered.
By mailbox, I assume you mean a specific mail address? You can, and might have to, specify those domains and addresses on both levels. Either...
No, that doesn't really seem to be possible at the moment. In theory it would be possible to move the underlying LVs back and forth in order to encrypt them, but then you would still have to teach the monitors the correct keys etc. That would probably be very complicated, and the...
Did you make sure to reload the UI with CTRL+SHIFT+R?
I am pretty sure that bug should be fixed in recent versions of the GUI [1].
If the problem persists, please share a screenshot of the exact panel that has the problem and, if possible, the steps to reproduce the issue.
[1]...
Hey, yeah, the one-off backup mask here has a bug; I already sent a patch that should fix it [1]. You can check on the mailing list when it will become available. For now, set a Max Depth that is deep enough to include all your namespaces.
[1]...
As @leesteken and @gfngfn256 said: simply remove the subscription from the old node and transfer it to the new one. When you do that is up to you; as long as you don't have both nodes in the cluster at the same time, you should have enough subscriptions. The key point is that all nodes need to have the same...
I am not sure what you mean. The page for PBS on our homepage [1] mentions:
The feature pages list this as the first feature:
So the license is AGPLv3 and you are “free to use the software”. If that isn't clear enough, the pricing page [2] also only ever talks about two things: the enterprise...
We base our kernel on the Ubuntu kernel. The exploits GitHub page states the following [1]:
The Ubuntu repo seems to take the setting from Debian itself here [2]. We don't alter this option, so it should be set on your machine. You can easily check whether that's the case with this command: cat...
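To illustrate the general pattern (kptr_restrict below is only a stand-in, not the option this thread is about): kernel settings exposed under /proc/sys can be read directly, and the reported number tells you the current value.

```shell
# Hypothetical example: read a kernel setting from /proc/sys.
# kernel/kptr_restrict stands in for whatever option applies here.
cat /proc/sys/kernel/kptr_restrict
```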
Alright, glad to hear everything is working now. You can take a look at the journal with journalctl if you want to see the logs from a previous boot. Have a look at the -b flag as well, since it lets you read the log starting from earlier boots. For example, journalctl -b 0 shows the last...
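To make the flag usage concrete (guarded so the snippet is a no-op on systems without systemd; the || true is there because a non-persistent journal may not store older boots):

```shell
if command -v journalctl >/dev/null; then
    # -b selects a boot: 0 = current boot, -1 = the one before, and so on
    journalctl --list-boots --no-pager || true   # which boots the journal covers
    journalctl -b 0  --no-pager -n 20  || true   # last 20 lines of the current boot
    journalctl -b -1 --no-pager -n 20  || true   # previous boot, if stored
fi
```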
Hey, you can pin an older kernel version as outlined in the documentation [1].
It would also be helpful if you could post the kernel panic and other logs that may help us analyze the problem.
Thanks!
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_kernel_pin
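Roughly, pinning looks like this on a system managed by proxmox-boot-tool (guarded so the listing only runs where the tool exists; the version string below is just an example, pick one from the list output):

```shell
if command -v proxmox-boot-tool >/dev/null; then
    proxmox-boot-tool kernel list   # show the installed kernels
fi
# then pin one of the listed versions (example version string):
#   proxmox-boot-tool kernel pin 6.8.12-4-pve
```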
Well the short answer: Historical reasons, probably.
The long answer: Replication was introduced with a no longer supported storage backend called DRBD (not to be confused with RBD). pve-zsync was first released as a tech-preview for PVE 3.4, which was also the first release to support ZFS [2]...
Except that the volume_import [1] and volume_export [2] functions use zfs send/receive.
If you look at the other storage plugins, for example LVM Thin (which does support snapshots), there is no volume_export function. As for BTRFS, which would support something similar, it is still considered a...
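Conceptually, what those export/import functions boil down to is an incremental zfs send piped into zfs receive on the target node. A rough sketch (dataset, snapshot, and host names are all made up; guarded and defused with || true so it stays harmless):

```shell
if command -v zfs >/dev/null; then
    # snapshot the volume on the source node
    zfs snapshot rpool/data/vm-100-disk-0@repl_new || true
    # send it incrementally relative to the last common snapshot,
    # receiving it on the target node
    zfs send -i @repl_old rpool/data/vm-100-disk-0@repl_new \
        | ssh root@target-node zfs receive rpool/data/vm-100-disk-0 || true
fi
```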
As stated before: Follow the steps I linked to in my previous reply [1] on the “good” node. This will make it a stand-alone node again. Afterward, you should decommission the “dead” node.
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstall