Hi,
have you run proxmox-tape catalog after inserting the tape you want to restore into the drive? This re-imports the tape's catalog, after which you can restore your backups from it. Also see the manual for more information [1].
[1]: https://pbs.proxmox.com/docs-2/tape-backup.html#restore-catalog
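In case it helps, the basic invocation looks roughly like this (a sketch; "drive0" is a placeholder for your configured drive name, so adjust it to your setup):

```shell
# Rebuild the catalog from the tape currently loaded in the drive.
# Once this finishes, the tape's content shows up again and the
# backups on it can be restored via the GUI or CLI.
proxmox-tape catalog --drive drive0
```

The exact options may differ depending on your version; the manual page linked above has the details.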
If you can, please post your solution and mark this thread as “Solved” by clicking “Edit Thread” at the top and selecting the matching prefix. This can help other users who run into the same issue.
How did you create the VM? Also, can you post the full configuration of that VM? If you edit the ide0 entry and switch it to "Do not use any media", does the VM start then?
The reason is fairly simple: ZFS is favored over BTRFS because BTRFS is still considered a technology preview. BTRFS also still has a couple of showstopper issues, which is why we don't support it at the same level (see e.g., [1]). However, backups are a fundamental feature, and they need to...
Did you reboot the VM in step 4 (after the parted command)? The third partition is still only 9G in size:
which is why lvresize and resize2fs have no effect:
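For reference, the usual grow sequence after enlarging the virtual disk looks roughly like this (a sketch; the device, partition number, and VG/LV names are assumptions based on a default install, so adjust them to your VM):

```shell
# Grow the third partition to use all free space on the disk
parted /dev/sda resizepart 3 100%

# Reboot the VM, or tell the kernel to re-read the partition table
partprobe /dev/sda

# Then grow the physical volume, the logical volume, and the filesystem
pvresize /dev/sda3
lvresize -l +100%FREE /dev/pve/root   # VG/LV names are placeholders
resize2fs /dev/pve/root
```

Without the reboot (or partprobe), the kernel still sees the old partition size, which matches the symptom above.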
Well, it's hard to tell what is wrong here, since we don't know how exactly you patched the kernel and which version you used. Also, what does the VM's config look like? Can you please test it with the proxmox-kernel-6.2.16-19 kernel from the no-subscription repository?
1. We're generally not thrilled when old threads are simply revived, especially because 2. this is a different problem: the error message already says it all, there is no "pmg" volume group on the VM. If I'm reading this correctly, this is not a PMG VM, but...
Our “fix” is a backport of the patches [1] provided by Sean Christopherson on the KVM/Linux kernel mailing list. We don't have any say over what ultimately ends up in the kernel, but if these patches are accepted, then yes, future mainline kernel releases will include them too. I'd recommend...
There isn't really a straightforward way to do that. From what I can tell, this is a Windows 10 VM? I tried to look into what is currently the best way to find misaligned memory accesses from a user's perspective (not a developer's!). However, I haven't found much, especially not in cases where...
Just an update, patches have since been submitted upstream [1]. I've tested them, and they work, you can find our backport here [2]. It might still be a little while until they are released with a new version of our kernel, though.
[1]...
Going off the provided information, I can't really say much. Can you please provide the VM's configuration as well as the output of pveversion --verbose?
These bugs affect the actual CPUs that your system uses. The issues arise from modern CPUs' use of “speculative execution”. This does not...
Likely yes, but I haven't been able to test that yet, so I can't give you any guarantees. If you test it and it doesn't work, please post any error messages you see.
The cause is already known, as mentioned before the CPU flag FLUSHBYASID is not properly exposed by KVM. A bogus check was merged upstream...
Downgrading to kernel 5.15 should work, but 5.15 is not officially supported with PVE 8. You can still use PVE 7, as we still support 5.15 there. As for a fix for kernel 6.2, we are working on it, but can't make any promises as to when it may land.
Yes, sorry if this was somewhat misleadingly phrased. By “schedule its tasks and process more efficiently” I basically meant that each process should then be able to use the same NUMA domain as the resources it needs (e.g., memory).
As I said before, if you want your VM to have 4 cores in total (not vCPUs, that is different again) and the host has 2 NUMA nodes, you should configure the VM as follows:
In the advanced CPU settings enable NUMA.
Set the number of sockets to match the host system, so: 2
Set the number of cores to...
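The same settings can also be applied on the CLI with qm set (a sketch; 100 is a placeholder VMID, and the values assume the 2-socket host from above):

```shell
# 2 sockets x 2 cores = 4 cores total, with NUMA enabled for the guest
qm set 100 --sockets 2 --cores 2 --numa 1
```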
That should be a perfectly normal score, so yes [1]. However, we don't recommend changing scores; in most cases they are already very well chosen.
[1]: https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#pmgconfig_spamdetector_customscores
Well alright, doesn't make a difference in this case ;)
Yes, it will be faster than whatever HDD-based solution you are using right now. If you are currently using VZDump and are fine with its performance, this should probably work for you.
That would probably be better, yeah. It really depends on what disks you choose, how many nodes there are in your Proxmox VE cluster and how often you want to create backups.
Yes, you should use enterprise SSDs for the backup server too. However, if there is only a 1 Gbit/s connection between the Proxmox VE hosts and the Proxmox Backup Server, that may become a bottleneck. Depending on your requirements, this may mean that your backups will take longer...
That depends. You could use Let's Encrypt certificates. Alternatively, you could set up your own CA and use self-signed certificates. You can read more about that in the manual [1].
[1]: https://pbs.proxmox.com/docs/sysadmin.html#certificate-management
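If you go the self-signed route, a minimal sketch with plain OpenSSL could look like this (the hostname is a placeholder; for production use you would normally set up a proper CA as described in the manual):

```shell
# Generate a self-signed certificate plus private key, valid for one year.
# "pbs.example.com" is a placeholder for your backup server's hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=pbs.example.com"
```

Note that clients will still need to trust this certificate (or its CA) explicitly, since it is not issued by a public authority.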