Ah, I see this particular regex is automatically anchored. Then yeah, you'll need to use .*\.lol instead, which will be anchored and become the regex ^.*\.lol$. So it will match the beginning of a line (“^”), then any character zero or more times (“.*”), then the literal string “.lol” and then...
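To illustrate what the anchored pattern does, here is a quick check with grep -E (grep is just a stand-in here for whichever matching engine actually applies the pattern):

    # ^.*\.lol$ matches any line that ends in ".lol"
    echo "a.lockermaster.lol" | grep -E '^.*\.lol$'    # matches
    echo "lol.example.com"    | grep -E '^.*\.lol$'    # no match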
Hey,
yeah, that is intentional. The command uses the enterprise repo by default, which does not work if you don't have a subscription. You could try pveceph install --repository no-subscription instead to use the no-subscription repo.
[1]: https://pve.proxmox.com/pve-docs/pveceph.1.html
Hello!
For this to work, the bootloader has to be signed with a specific key. Since unfortunately most systems only ship with Microsoft's public keys, the bootloader also has to be signed with the corresponding private key. To work around this a bit, you can use a...
The regex you posted demands at least one subdomain. So “lockermaster.lol” does not match this regex, but “a.lockermaster.lol” does. Also note that .* at the beginning can make this regex very inefficient.
Small breakdown of the regex:
.*: match any character as often as possible. This leads...
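I don't have your exact pattern in front of me, but assuming it was something along the lines of .*\.lockermaster\.lol (anchored to ^.*\.lockermaster\.lol$), the subdomain requirement looks like this:

    echo "lockermaster.lol"   | grep -E '^.*\.lockermaster\.lol$'   # no match, nothing before ".lockermaster.lol"
    echo "a.lockermaster.lol" | grep -E '^.*\.lockermaster\.lol$'   # matches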
This should be fine, and yes, this is exactly what we recommend in this case [1]:
So just run pvecm updatecerts.
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node
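For completeness, assuming the node was already removed as described in [1], that is simply:

    # run this on one of the remaining cluster nodes
    pvecm updatecerts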
The reason we mention that for hyperconverged clusters is that PVE 8 currently does not support any other Ceph versions. Hence, updating to Quincy after upgrading results in an untested and unsupported setup, so we can't give any guarantees about how well that will work. Additionally, our...
The main problem you'd run into is likely live migration from newer systems back to older systems. This is generally not supported and may result in crashes or freezes of your VMs.
There may also be new features available on newer nodes that won't work as intended if a VM is migrated back. You...
From the commands you provided I can only see that rpool is mounted at /rpool and rpool/ROOT is mounted on /. So unless you specifically mounted something else there, /var/lib/vz would be part of rpool/ROOT, which is automatically mounted by ZFS. To get more info about what is mounted where, you...
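To check, something like this should show where rpool/ROOT and /var/lib/vz actually live (the exact column choice is just my preference):

    zfs list -o name,mountpoint,mounted
    findmnt -T /var/lib/vz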
Generally, our experience with swap has not been the best. What filesystem are you using? And what about RAID? How much RAM do you have available?
Hello!
Why do you want to set it up that way? What are your expectations for such a setup, and are you sure that this is the best way to achieve it?
This should be shown when you select the VM and then go to its Summary. There you'll find a tile that shows the status, name, CPU, RAM and storage usage.
Hey,
yeah, this was changed fairly recently [1]. Ideally, you don't want a bond for Corosync; instead, hand the interfaces over to Corosync directly as separate links. Corosync handles switching between links better than it would with a bond.
[1]: https://git.proxmox.com/?p=pve-docs.git;a=commitdiff;h=4ab400d1
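A rough sketch of what that looks like when creating a cluster; the addresses are placeholders and the --link0/--link1 syntax is from memory, so please check the pvecm man page before relying on it:

    # give Corosync two separate NICs/addresses instead of one bond
    pvecm create mycluster --link0 10.10.10.1 --link1 10.10.20.1
    # and on a node joining the cluster
    pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2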
Hey,
first off, the PVE config is not identical to the PBS one, so it may not work as expected. PVE and PBS use different LDAP implementations. Secondly, I am not sure what you are trying to do in your last post, but you don't need to add the PAM or PVE realms to your...
Since this fix is taking longer than I expected (sorry about that), you can use this workaround for now:
Edit the file /etc/proxmox-backup/domains.cfg like so:

ldap: <realm-name>
    base-dn <base-dn>
    bind-dn <bind-dn>
    mode <ldap|ldaps|ldap+starttls>
    server1 <server>
    server2...
Ah, it seems you ran into an issue with the new flow for creating LDAP realms. Sorry, this looks like a bug. The query used to check whether the LDAP connection works appears to exceed a size limit and fails. The part handling this logic only checks whether the query succeeds, not why it...
Thanks for the log and the package versions. Could you also post the following information? It would probably help us reproduce the issue:
The configuration of the sync job (you can find it in /etc/proxmox-backup/sync.cfg).
The configuration of the remote; please don't forget...
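If it is easier, the output of something like this should contain the same information (just remember to redact anything sensitive before posting):

    cat /etc/proxmox-backup/sync.cfg
    proxmox-backup-manager remote list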
Yeah, you have a typo there. It should be “address”, not “adress”. Try changing that, and then you should be able to run systemctl restart networking.service or ifup -a to bring up your network.
Hi,
could you post the contents of the file /etc/network/interfaces? Ideally between code tags to preserve the formatting.
From the log you posted, you didn't specify an address for vmbr0, so it could not be brought up. Take a look at the “Network Configuration” section [1] of the manual for...
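For reference, a minimal static setup for vmbr0 in /etc/network/interfaces looks roughly like this (the addresses and the bridged NIC name are placeholders):

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0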