No, the backups made through PBS appear in the storage list, so you can see them without logging in to the separate PBS server. The way you were looking only shows single regular vzdump backups.
Node -> pbs-storage -> Backups -> listing...
So if you ever make a second node, just add the pbs server...
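For reference, adding the PBS datastore as a storage entry can also be done from the CLI on the new node. A minimal sketch; the storage name, server address, datastore name, and credentials below are all placeholders:

```shell
# Add a Proxmox Backup Server datastore as a PVE storage entry
# (run on the PVE node; all values here are placeholders).
pvesm add pbs pbs-storage \
    --server 192.168.1.50 \
    --datastore backups \
    --username backup@pbs \
    --fingerprint '<server-fingerprint>' \
    --password '<password>'

# List the backups visible through that storage:
pvesm list pbs-storage
```

Once added, the PBS backups show up under that storage in the GUI just like on the first node.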
Just want to thank the Proxmox team for finding a solution and fixing booting with legacy BIOS and GRUB. I'm sure many people have older systems that don't have UEFI boot but are using ZFS on the rpool.
This all works great.
I actually fresh installed 6.4 as it was an easy alternative in my scenario.
Try on both. I'm not sure how to disable mitigations on CentOS, but on Debian:
To try: edit /etc/default/grub
Example: GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
Then run update-grub and reboot.
Could this be due to Linux kernel mitigations? I had a similar issue. Try turning mitigations off as a test and see whether that improves your performance. At least you'll know why.
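Before and after the change, you can confirm what the kernel actually applied by reading sysfs (this path is standard on recent kernels):

```shell
# Show the current mitigation status for each known CPU vulnerability.
grep . /sys/devices/system/cpu/vulnerabilities/*

# After rebooting, confirm the kernel was booted with the new flag:
cat /proc/cmdline
```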
I'm running Proxmox Backup Server in a container, but I think I may have overcomplicated my reinstall scenario, which caused me to lose my datastore accidentally.
[ rpool - ZFS 180GB ]
  Proxmox VE
  container 900: Proxmox Backup Server - 16GB
[ tank0 ]
  - vm1
  - vm2
[ tank1 ]
  - container...
Thanks for more info on this, but there is still confusion for me.
If we're running 6.3 installed from an original ISO on legacy BIOS, do you suggest we reinstall 6.4 from scratch to get the vfat partition for GRUB?
Edit: I see post #2 has some info on trying to use the 512MB partition and initialize with...
This has been a source of confusion for me as well.
I know that during a 6.3.x (mid-cycle) upgrade, the rpool was offered a ZFS pool upgrade that at the time REQUIRED UEFI or it would not boot.
I was able to change my BIOS boot mode to UEFI, and then it worked.
Moving forward I am confused if this...
I think the point is to make it more apparent; just make the border larger or make the text bold or something. The orange was just an idea.
I remember the first time I installed Proxmox I had no idea the Next button was clickable; it looked ghosted.
Do you mean you ran out of memory and the OOM killer started killing tasks?
I have 128G of memory and am also wondering whether I really need to commit 64G just to the ZFS ARC.
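By default ZFS on Linux caps the ARC at roughly half of RAM, which is where the 64G comes from. If you'd rather cap it lower, one common approach is a modprobe option; the 16 GiB value below is just an example:

```shell
# Cap the ZFS ARC at 16 GiB (16 * 1024^3 = 17179869184 bytes);
# adjust the value to taste.
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the option applies at boot, then reboot.
update-initramfs -u
```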
If I experiment with different host types (kvm64, qemu64, host) do I risk affecting the windows activation?
I am building a new Windows Server 2019 VM and am curious whether trying different host types will affect my activation license.
Should I pick a specific one in this case? I do perhaps to...
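For what it's worth, switching the CPU type is just a VM config change, so it's easy to flip back if activation complains. A sketch; VMID 101 is a placeholder:

```shell
# Change the emulated CPU type for a VM (takes effect on next VM start).
qm set 101 --cpu kvm64
# or
qm set 101 --cpu host

# Check what is currently configured:
qm config 101 | grep '^cpu'
```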
I don't think rescue works with ZFS.
Just before Proxmox begins booting, you should see a screen that lets you pick the kernel. It times out after a few seconds, so press an arrow key to stop it and pick one.
Did you get anywhere? What about error messages when attempting to start?
In the GUI Hardware panel for one of the VMs, does it show anything for hard disks?
qm rescan
check /etc/pve/qemu-server/
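A minimal sketch of that check from the shell; VMID 100 is a placeholder:

```shell
# Re-scan all storages for disk volumes not referenced by any VM config;
# found volumes are registered as unused disks in the owning VM's config.
qm rescan

# Inspect the VM config directly for disk entries (scsi/virtio/sata/ide):
grep -E '^(scsi|virtio|sata|ide)[0-9]' /etc/pve/qemu-server/100.conf
```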
I recently put two used Supermicro servers into production, and it looks like they are in legacy mode. I'm on the enterprise repository and haven't been offered this upgrade yet. But I've been reading that I might be able to safely switch the BIOS to UEFI mode without having to reinstall, so I will try that...
Sorry that it's freezing; it may be a memory issue, which is hard to debug if it's not really reproducible. You can boot the Proxmox installer and run the memory test from the boot screen.