Alright, glad to hear everything is working now. You can inspect the logs from a previous boot with journalctl. Have a look at the -b flag as well, as it lets you read the log starting from a specific boot. For example, journalctl -b 0 shows the last...
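For illustration, two handy invocations (boot indices are relative, 0 being the current boot):
journalctl --list-boots   # list recorded boots with their indices and time ranges
journalctl -b -1          # show the journal of the previous boot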
Hey, you can pin an older kernel version as outlined in the documentation [1].
It would also be helpful if you could post the kernel panic and other logs that may help us analyze the problem.
Thanks!
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_kernel_pin
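In case it helps, the workflow from [1] boils down to roughly this (the kernel version is only an example; pick one from the list output):
proxmox-boot-tool kernel list             # show the kernels available for pinning
proxmox-boot-tool kernel pin 6.2.16-3-pve # boot this version by default from now on
proxmox-boot-tool kernel unpin            # revert to the default kernel later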
Well, the short answer: historical reasons, probably.
The long answer: replication was introduced alongside a storage backend called DRBD (not to be confused with RBD), which is no longer supported. pve-zsync was first released as a tech preview for PVE 3.4, which was also the first release to support ZFS [2]...
Except that the volume_import [1] and volume_export [2] functions use zfs send/receive.
If you look at the other storage plugins, for example LVM Thin (which does support snapshots), there is no volume_export function. As for BTRFS, which would support something similar, it is still considered a...
As stated before: Follow the steps I linked to in my previous reply [1] on the “good” node. This will make it a stand-alone node again. Afterward, you should decommission the “dead” node.
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstall
That depends: do you plan on bringing the “dead” node back up? Can you essentially re-install the dead node, or would you lose some important data?
Which approach is easiest/most efficient really depends on your situation. If you want to remove the dead node from the cluster, you can use pvecm delnode...
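For illustration, a minimal sketch of the removal step (the node name is a placeholder; only run this once the dead node is permanently offline, since it must never rejoin the cluster with its old identity):
pvecm delnode dead-node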
Hi,
two-node clusters aren't supported by us without a QDevice [1]. Once one node fails, the entire cluster will lose quorum, which will lead to exactly the situation you are in now.
To temporarily work around this issue, you can use the following command:
pvecm expected 1
However, be...
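If you later want to make the two-node setup properly quorate, adding a QDevice looks roughly like this (the IP is a placeholder; it assumes corosync-qnetd is set up and running on an external host):
pvecm qdevice setup 192.0.2.10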
Well, a) the software stack is and will remain open source, so it isn't tied to support contracts, and b) if this is about making sure that your production and testing environments use the same packages, there are ways to ensure that (e.g., POM can help here [1,2]). For one, the packages in the...
Mixing testing and production nodes is a really bad idea. Remember that a cluster's stability depends on it being able to find a stable quorum. If you do your testing with a subset of your nodes, they may interfere with the quorum of your cluster, which is something you don't want in a...
How is this certificate created, then? To my knowledge, the fingerprint only needs to be entered for self-signed certificates. Otherwise, the field can simply be left empty, in which case the certificate is just verified normally via OpenSSL.
That is, if PVE somehow...
Yes, that is exactly my point. If you have four nodes that are properly subscribed and you add one node that has no subscription, the entire cluster is considered “no subscription”. So if you open a support ticket with such a cluster, your cluster will be treated as if it had no subscription...
Hi,
if your clusters are new enough, you could use the experimental remote migration feature [1].
The most stable and well-tested approach is probably creating a backup and restoring it on the other cluster [2]. All you need for that is basically some storage that you can either transfer...
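A rough sketch of the backup/restore path (the VM ID, storage names, and archive name are placeholders):
# on a node of the source cluster
vzdump 100 --storage backup-store --mode snapshot
# transfer the resulting archive to the target cluster, then restore it there:
qmrestore /mnt/transfer/vzdump-qemu-100.vma.zst 100 --storage local-zfs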
You could make use of Remote Migration, too, though that is still experimental [1] and may not be available on your older PVE host (though you should probably update that anyway).
Alternatively, to move the disks, you can use zfs send/receive like so:
Shut down the VM on the sending node and...
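A minimal sketch of that manual path (the pool, dataset, snapshot, and host names are placeholders; it assumes root SSH access between the nodes):
# on the sending node, after shutting down the VM
zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send rpool/data/vm-100-disk-0@migrate | ssh target-node zfs receive rpool/data/vm-100-disk-0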
Dug into this a bit; it looks like a POSIX compliance thing [1]. But if you are already aliasing more, which is a standard command, I'd really recommend you alias it to less -F, or maybe consider alias more="LESS_IS_MORE=1 less -F". This will only paginate when there is more than one screen of...
You probably want to start using less. Its man page (man less) describes the -F flag:
less -F will display a file like cat if its content fits on a single screen. Otherwise, it will paginate.
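If you go the alias route, a minimal sketch for your shell profile (assuming bash and a less version that supports -F and LESS_IS_MORE):
# in ~/.bashrc: make more behave like less, quitting immediately if the content fits on one screen
alias more='LESS_IS_MORE=1 less -F'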
Ok, but this isn't comparable to actually having your PBS destroyed, because in that case it obviously would have no knowledge of your media set, or the catalog for that matter. So I am not sure how meaningful this test really is.
If I understand you correctly, you basically just want to re-import...
Alright, maybe there is a slight misunderstanding here. You don't need the media pool to be able to restore from tape, but rather a media set. At least that's my understanding at the moment. What do you see when you run proxmox-tape media content?
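For context, these are the commands I would check (assuming the tape drive is already configured):
proxmox-tape media list      # list known media and the media sets they belong to
proxmox-tape media content   # list the backup snapshots stored on the media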
No problem, please mark this thread as solved by clicking "Edit Thread" above and selecting the "Solved" prefix if you don't have any further questions.
I think you mean November, not September? At least I don't see a backup from September anywhere in the logs you provided. Anyway, if you are wondering why the backup "vm/101/2023-11-29T09:25:41Z" was removed, the answer is simple: the 29.11. falls in the same week as the 01.12., so from that week PBS...
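To double-check which snapshots a given retention policy would keep, you can simulate the prune; a sketch, with the repository, group, and keep values as placeholders:
proxmox-backup-client prune vm/101 --dry-run --keep-weekly 4 --repository user@pbs@pbs-host:datastore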