Hello,
We have the following problem, and I'm out of ideas as to what could be wrong: I can no longer reach two of our three hosts in a small cluster at Hetzner via IPv6. In my opinion everything is set up correctly, and it has already worked as it is set up now. We have split up a larger cluster...
I found the issue. I was able to start the rescue mode, and I found that in /etc/passwd the login shell for root was set to "exit". I have no idea how or when this happened. However, everything is working again.
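For reference, a minimal sketch of that kind of repair. It demonstrates the edit on a throwaway copy of the broken line; in rescue mode you would run the same `sed` against the installed system's /etc/passwd on the mounted root filesystem (the mount point, e.g. /mnt/etc/passwd, and /bin/bash as the restored shell are assumptions, not from the original post):

```shell
# Sketch only: reproduce the bad root entry in a temporary file,
# then restore a real login shell the same way you would in rescue mode.
passwd_file=$(mktemp)
printf 'root:x:0:0:root:/root:exit\n' > "$passwd_file"

# Replace the bogus "exit" shell at the end of the entry with /bin/bash
# (assumes GNU sed and that /bin/bash exists on the target host)
sed -i 's|:exit$|:/bin/bash|' "$passwd_file"

cat "$passwd_file"   # root:x:0:0:root:/root:/bin/bash
```

Alternatively, `chsh -s /bin/bash root` run in a chroot into the mounted system would achieve the same thing without hand-editing the file.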
I found the issue: in /etc/passwd the shell for the root user was set to "exit". I have no idea how or when this happened, but I'm able to log in again.
Hello, somehow my previous post is gone so another try:
after some problems with an upgrade from 7 to 8 I decided to reinstall and restore all VMs from the backup. Everything went well until I rebooted the new Proxmox machine. Now I cannot log on anymore. If I try to log on from the web UI I...
Hello,
after the boot problem following the upgrade from 7 to 8, I decided to reinstall Proxmox and restore all VMs from backup. This worked well: the host was added to a cluster and all VMs ran fine, until I restarted the new host. Now I can't log on anymore; I get this error message:
Linux Europa...
Yes, I can start into the rescue shell
`lsblk`
no zpool, but I can handle that once PVE is starting with network again :)
`journalctl -b` - sorry, I can't copy 'n' paste due to the limited remote console
this is because of the recovery mode, right?
all disks
Hello :)
I've upgraded one Proxmox server from 7 to 8. After the upgrade the server won't start anymore. There are ACPI errors, but I am sure these errors were already there under Proxmox 7, and it started with no problem.
It just halts forever in this state.
ACPI Error: No handler for Region
ACPI...
Hello,
I was able to fix this by setting mds_dir_max_commit_size to 80:
`ceph config set mds mds_dir_max_commit_size 80`
`ceph fs fail <fs_name>`
`ceph fs set <fs_name> joinable true`
I found help in the ceph issue tracker https://tracker.ceph.com/issues/58082
Hello,
I have the same issue after the latest updates and a reboot.
Jan 11 12:48:56 mimas systemd[1]: Started Ceph metadata server daemon.
Jan 11 12:48:56 mimas ceph-mds[2607691]: starting mds.mimas at
Jan 11 12:49:26 mimas ceph-mds[2607691]: 2023-01-11T12:49:26.447+0100 7fcdc0f9c700 -1...