Hello,
I am upgrading a standalone server from proxmox 7.4 to 8. I followed the instructions and pve7to8 reported no issues. I've done multiple upgrades from 7 to 8 at work, but this is for a home system, and it's the first time it failed.
After the reboot, the OS failed importing rpool...
Thanks (even though a bit late)!
I remember trying both of those things back then, but it didn't help. I ended up moving the mon to another node, but it still bugs me that I couldn't find a rational reason why there was an issue.
Hello,
I have a monitor node in our cluster that had an ungraceful reboot. After the node came up, the monitor could not join the cluster. After some retries I destroyed the monitor (from the GUI) and recreated it, but still could not join the cluster. I waited about a day "just in case" the...
It's been a while since I came across this, but I think I just ignored it. If I remember correctly, the error was reported because pve/data is already configured as a thin pool, so converting it to the same type is nonsensical (at least that's what lvconvert thinks).
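If that explanation is right, it should be possible to confirm it before running lvconvert at all. A minimal check, assuming the standard Proxmox LVM layout with a `pve/data` volume:

```shell
# Show the segment type of pve/data; "thin-pool" means the
# conversion already happened and lvconvert can be skipped.
lvs --noheadings -o lv_name,segtype pve/data
```

If the second column already reads `thin-pool`, the "Command not permitted" error is harmless and the step can be ignored.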
But if you do have many nodes ...
That feature request is when communicating to a backup server. Not sure if things are different when using the included backup utility to backup to local ceph storage.
Oh yeah, about that. It's nice that there is an option to select a single node, but it...
OK, understood. However, to make the GUI more user-friendly, I would recommend the following:
Currently, in the edit backup job popup, there is a tick mark on the left column that allows you to select all container types in one click. That gives the impression that it's ok to do...
Yes, I am backing up multiple nodes at the same time. This used to work, however: things would grind to a halt but eventually succeed. Now, they just hang.
Output of pveversion -v below:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.2-11 (running...
Hello,
I have a problem when backing up to a ceph cluster of spinning disks.
I have a cluster of 27 server-class nodes with 60 OSDs on a 10gig network. If I backup ~10 VM/CTs it works fine. Upping that number to ~20 the backup grinds to a halt (write bandwidth in the KB/s range) but...
No inconsistencies. There is some initial delay bringing the OSDs up until they catch up, but that's the same as you would get if a node is down for some time and the OSDs have to catch up. It takes me less
I know exactly what you mean about green status. There's always something...
It may...
Turns out clonezilla is not the solution. It gets confused with the tmeta partitions and croaks, even in "dd" mode.
What I ended up doing is the following:
- make sure there is no mgr/mds/mon daemons and no VMs/CTs on the node
- tar up /var/lib/ceph on the old drive and store it somewhere on the...
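The tar step above might look something like the following. This is a hedged sketch: the archive path is an assumption, and `-p` is used so ownership and permissions of the Ceph state directory survive the restore.

```shell
# Archive the node's Ceph state (no daemons should be running at this point).
# -p preserves permissions; -C keeps absolute paths out of the archive.
tar -czpf /tmp/ceph-node-backup.tar.gz -C /var/lib ceph

# On the new drive, restore it the same way:
#   tar -xzpf /tmp/ceph-node-backup.tar.gz -C /var/lib
```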
clonezilla is probably what I will use, since I already have it on our PXE server and we've been using it for cloning and backing up physical machines for quite some time. I am just not sure how to resize the tmeta partitions that Proxmox uses.
Along the same lines, I have a similar issue:
I want to replace the Proxmox boot drive with a larger one. It looks like the easiest solution is to:
(a) migrate VMs/CTs to different nodes
(b) shut down machine and update HW
(c) re-install Proxmox on the new boot drive
(d) remove "old" instance...
Did some digging and found out that these are spurious logs for thermal prochot throttling.
Please take a look at the following link:
https://www.spinics.net/lists/kernel/msg4380894.html
I can verify that the following command:
# wrmsr -a 0x19c 0x0a80
indeed silences the spurious messages...
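For anyone hitting the same log spam, a sketch of applying the above on Debian/Proxmox follows. The MSR address and value are from this thread and the linked discussion; the package install and module load are assumptions about a default setup, and the write does not persist across reboots (you would need a cron @reboot entry or a systemd unit for that).

```shell
# wrmsr comes from the msr-tools package
apt install msr-tools
# Expose /dev/cpu/*/msr so wrmsr can write to it
modprobe msr
# Clear the IA32_THERM_STATUS (0x19c) log bits on all CPUs (-a)
wrmsr -a 0x19c 0x0a80
```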
@Bruno Félix , if you are still seeing this, can you please verify if the nodes that are having this problem also have an Intel Omnipath HFI card installed? If not that, maybe some other fabric card?
We see this only on machines that have HFI cards.
Thanks for writing this.
However, the last lvconvert command gave me an error:
# lvconvert --type thin-pool pve/data
Command on LV pve/data does not accept LV type thinpool.
Command not permitted on LV pve/data.
#
Any thoughts?