boot errors

Ma907xb

After installing PVE as RAID1, I get the errors below on boot. Are any of these errors a concern? Specifically the ones that mention 'no caching mode page found'?

no caching mode page found
assuming drive cache: write through





[Screenshot: proxmox install 7.JPG]
 
Proxmox VE version 6? And when you say RAID1, do you mean a ZFS RAID1 with our installer, or a HW RAID1?

Are any of these errors a concern?

Hmm, IMO the ACPI ones look a bit suspicious. What CPU/Mainboard is used here? Maybe check the BIOS/UEFI settings for some power saving options, or the like (just a guess).
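
For reference, to see the exact ACPI messages from the kernel log, something like this should work (the grep pattern is just an example):

dmesg --level=err,warn | grep -i acpi    # show only warning/error level kernel messages mentioning ACPI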

Specifically the ones that mention 'no caching mode page found'?

no caching mode page found
assuming drive cache: write through
Those two are not real errors, just normal kernel log messages noting that the disk does not report its cache mode, so write-through is assumed to ensure writes go directly through to the disk.
What disks do you use?
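
If you want to double-check which cache mode the kernel picked for a disk, something along these lines should show it (assuming the disk is /dev/sda; hdparm may need to be installed separately):

cat /sys/block/sda/queue/write_cache    # prints "write back" or "write through"
hdparm -W /dev/sda                      # reports whether the drive's own write cache is enabled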
 
Proxmox VE version 6? And when you say RAID1, do you mean a ZFS RAID1 with our installer, or a HW RAID1?

Yes, version 6. I have disabled hardware RAID and have each disk configured in AHCI mode. Yes - I mean ZFS RAID1 with the Proxmox installer.
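
For reference, the mirror the installer created can be verified from the shell; this assumes the default pool name rpool that the PVE installer uses:

zpool status rpool    # should show a mirror-0 vdev with both disks ONLINE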



Hmm, IMO the ACPI ones look a bit suspicious. What CPU/Mainboard is used here? Maybe check the BIOS/UEFI settings for some power saving options, or the like (just a guess).

I will play around with my power settings and see if this changes anything. I've read this is a standard error and can be ignored.

Those two are not real errors, just normal kernel log messages noting that the disk does not report its cache mode, so write-through is assumed to ensure writes go directly through to the disk.
What disks do you use?

Both drives are SATA. My intention is to configure the 3 servers with clustering + Ceph. Will this configuration need to change? Will write-through work correctly for this installation?
 
Ah, OK, now I got it, you're the one from: https://forum.proxmox.com/threads/correct-initial-installation-w-ceph.57463/#post-265670 :)

Both drives are SATA. My intention is to configure the 3 servers with clustering + Ceph. Will this configuration need to change? Will write-through work correctly for this installation?

Then I'll mention the same as in my other reply: I somehow (no idea how) missed the Ceph part you mentioned in the other thread and just focused on your posted screenshot with ZFS. Yes, I'd scrap the setup and redo it without ZFS. As said in the other thread: a small SSD for PVE itself could do wonders, as you'd free up a full disk for Ceph (Ceph works best when it manages OSD disks completely on its own). Here even 128 GB would be enough, and 256 to 512 GB drives (SATA or M.2) cost about 60 to 80 €, so IMO well worth the investment.
 
Ah, OK, now I got it, you're the one from: https://forum.proxmox.com/threads/correct-initial-installation-w-ceph.57463/#post-265670 :)



Then I'll mention the same as in my other reply: I somehow (no idea how) missed the Ceph part you mentioned in the other thread and just focused on your posted screenshot with ZFS. Yes, I'd scrap the setup and redo it without ZFS. As said in the other thread: a small SSD for PVE itself could do wonders, as you'd free up a full disk for Ceph (Ceph works best when it manages OSD disks completely on its own). Here even 128 GB would be enough, and 256 to 512 GB drives (SATA or M.2) cost about 60 to 80 €, so IMO well worth the investment.

Thank you. I will just respond here. My (future) Ceph setup will be 3 nodes (clustered). I do have 3 spare 256 GB SSDs to install PVE onto.

I should select ext4 on the SSD, and then configure the other drives during the Ceph configuration? Wouldn't the SSD have an OSD as well?
 
I should select ext4 on the SSD, and then configure the other drives during the Ceph configuration?

Yes, exactly. You can easily create OSDs after the installation and initial Ceph setup through the web interface.
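
For completeness, the same steps can be done from the shell with the PVE 6 pveceph tool; the network and the device below are only placeholders for your own values:

pveceph install                        # install the Ceph packages on the node
pveceph init --network 10.10.10.0/24   # one-time cluster init, use your own Ceph network
pveceph mon create                     # create a monitor on this node
pveceph osd create /dev/sdb            # turn an empty disk into an OSD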

Wouldn't the SSD have an OSD as well?

Sorry, how do you mean that? Yes, in theory you could add an OSD on part of that SSD, but I really would not do that.
1. 256 GB is rather small for an OSD, especially if part of it needs to be used for hosting Proxmox VE itself.
2. It'd place the VM data and the PVE operating system on the same disk, which slows things down and adds tight coupling.
 
Yes, exactly. You can easily create OSDs after the installation and initial Ceph setup through the web interface.

ok

Sorry, how do you mean that? Yes, in theory you could add an OSD on part of that SSD, but I really would not do that.

How is Ceph redundant without an OSD on the disk that holds the operating system and VM files? If the OS drive fails, will the other OSDs rebuild it?

1. 256 GB is rather small for an OSD, especially if part of it needs to be used for hosting Proxmox VE itself.
2. It'd place the VM data and the PVE operating system on the same disk, which slows things down and adds tight coupling.

I added the SSD to my server and reinstalled Proxmox onto it. I did the installation without reformatting my other SATA disks.

[Screenshot: proxmox install 8.JPG]

After my installation on all three servers I updated to enterprise, and I see the SSD is under LVM. See below.

[Screenshot: proxmox install 9.JPG]

I also still see the two 1 TB disks listed under ZFS. Did Proxmox recognize these two as ZFS? Will I need to reformat both of these disks to proceed with my Ceph configuration?

[Screenshot: proxmox install 10.JPG]
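
In case the old ZFS disks do need to be cleared before they can become OSDs, a rough sketch of how that could look from the shell (assuming the two data disks are /dev/sdb and /dev/sdc; these commands destroy all data on the disk, so double-check the device names first):

sgdisk --zap-all /dev/sdb    # wipe the GPT/MBR partition tables
wipefs --all /dev/sdb        # clear any remaining filesystem/ZFS signatures
# repeat for /dev/sdc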
 
