I commented out the line that mounts sdb1 in /etc/fstab, and that allowed the server to complete the boot process.
Thank you for the hint.
Just not sure why it suddenly could not see the USB backup drive that was attached...
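For anyone hitting the same hang: a minimal sketch of a more forgiving fstab entry for a removable backup drive (the mount point and filesystem are assumptions, not from the thread):

# /etc/fstab -- "nofail" lets the boot continue if the drive is absent,
# and the systemd device timeout keeps the wait short instead of
# dropping the server into emergency mode.
/dev/sdb1  /mnt/backup  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2

Using a UUID instead of /dev/sdb1 also avoids trouble if the device name changes between boots.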
The output of journalctl -b is way too long and I don't have an easy way to copy it. I am accessing the server via an old KVM, as it is at a datacenter, so I can only take screenshots... if there is a way to filter it down further, let me know.
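A couple of ways to cut the journalctl output down to screenshot size (the sdb1 filter is just an assumption based on the drive mentioned above):

# Only error-priority and worse messages from the current boot:
journalctl -b -p err --no-pager

# Or grep the boot log for the suspect device:
journalctl -b --no-pager | grep -i sdb1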
Here is the output of systemctl status
How do I run a...
This is what I have access to... are there any commands I can run in emergency mode whose output would help make sense of this?
Would the output of journalctl -xb help, or are there particular parameters I can pass to filter out what you want to see?
I just updated the packages through the Proxmox 6.x (I think 6.4 or 6.5) web interface and it caused the LXC containers to stop working properly... so I rebooted the Proxmox node and now it won't come back up. Here is a screenshot of where the boot process stops in emergency mode. I am not...
Your suggestion worked. Thank you.
(I plan to upgrade, but it is a time-consuming task to deploy a new server with the latest CentOS, transfer the existing applications to it, and keep things working... I wish there were a way to upgrade directly from within CentOS 5.)
I migrated some CentOS OpenVZ containers from Proxmox 3.4 to Proxmox 6 and they would not restore at all.
I even tried this patch on /usr/share/perl5/PVE/LXC/Setup/CentOS.pm, but it still gives me the same error and I am unable to restore my CentOS 5.11 containers...
I keep getting this error...
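For context, the restore step involved is roughly the following (the CT ID, archive name, and target storage are placeholders, not taken from the thread) — presumably the point where the error above appears:

# Restore a vzdump archive from the old OpenVZ node as an LXC container:
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101.tar.gz --storage local-lvm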
Yes. What I am looking for are instructions on how to reinstall the kernel so I can boot successfully, because I currently have a kernel panic and am unable to boot into the OS at all.
And my server was still on Proxmox v3, I think, with OpenVZ containers.
How did you install this kernel? I have a kernel panic and the system won't boot, and I can't find instructions on how to reinstall the PVE kernel so it can boot... Let me know. Thanks.
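Not the original poster's procedure, but a generic sketch of reinstalling the PVE kernel from a rescue shell or chroot (the package name below is an example; list what is actually installed first, since it differs between PVE versions):

# Find the installed kernel package(s):
dpkg -l | grep pve-kernel

# Reinstall it and regenerate the initramfs and GRUB config:
apt-get install --reinstall pve-kernel-5.4
update-initramfs -u -k all
update-grub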
The proxmox node is in a datacenter. The datacenter had given me a few public IP ranges/blocks that I can use with my virtual servers.
Here is what the interface looked like before the upgrade. Can you please show me what it should look like in Proxmox 5?
For example, in addition to a static...
I upgraded a server from Proxmox v3 to v5 by reformatting. Then I added the same IP config from v3 to the network interfaces file. I am able to access the Proxmox interface; however, the additional IP ranges that were on the host interface no longer work for the containers.
I have read the...
Why the double entry? Wouldn't this suffice if I wanted to add the IP range that contains the IP 192.168.16.2?
auto vmbr10
iface vmbr10 inet static
    address 192.168.16.2
    netmask 255.255.255.0
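Either way, a quick check that the range actually landed on the bridge (standard ifupdown/iproute2 commands, not from the thread):

# Bring the bridge up and confirm the address and route:
ifup vmbr10
ip addr show vmbr10
ip route show dev vmbr10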
I do not understand what you mean. Could you please paste what it would look like instead?
And is there a tutorial on how to do this from the GUI as well?
Thank you.
This doesn’t make sense. RAID0 is striping and requires a minimum of 2 drives.
The point of my question was to understand whether the additional drives should be left untouched for Ceph to manage, or whether they should be RAIDed somehow.
Thank you for your reply.
Q7: Can Ceph automatically grow the pool as you add more server nodes?
Q8: Also, do I understand correctly that if I plan to use Ceph, then I should not install Proxmox with/on ZFS RAID10?
Q9: If I have 10 drive bays, what storage config would be recommended?
-...
I am confused. Back in Proxmox 3, in order to add additional IPs from a different block to my containers, all I had to do was add the following to the Proxmox network interfaces file:
auto vmbr0:1
iface vmbr0:1 inet static
    address 123.456.789.012
    netmask 255.255.255.0
This doesn't...
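On newer Proxmox/Debian hosts the same effect can usually be achieved without an alias stanza; a sketch using documentation-range placeholder addresses (adjust to the real blocks):

auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    # Attach the additional block to the same bridge:
    up ip addr add 198.51.100.10/24 dev vmbr0
    down ip addr del 198.51.100.10/24 dev vmbr0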
Thank you, Chris,
But if this replaces the NAS, wouldn't Ceph running off of all 3 nodes create major overhead on the servers and on network bandwidth?
3. Are there any known benchmarks showing how much additional memory, CPU, and network bandwidth Ceph takes to keep all storage in sync?
4. Also, isn't Ceph...
Q8: So how would Ceph potentially work in a Proxmox HA cluster?
Would it be to set up 3 Proxmox servers and combine each server's disks/RAID into a Ceph storage, in such a way that CephFS would show up as a single storage in each Proxmox server's interface but would be using disks from all 3...
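For reference, a sketch of the kind of per-node setup that matches this description, using PVE 6's pveceph tooling (subcommand names vary slightly between versions; the network and device are placeholders):

# On each of the 3 nodes:
pveceph install                       # install the Ceph packages
pveceph init --network 10.10.10.0/24  # once, on the first node only
pveceph mon create                    # one monitor per node
pveceph osd create /dev/sdb           # one OSD per raw, un-RAIDed disk
pveceph pool create vmpool            # shared pool visible from every node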