Yes, @Dunuin - we thought we weren't going to get new updates,
We still don't want to jump to the latest version, @fiona. We wanted to, but we will do it on our new hardware. That's why we wanted to make sure that when we get the updates on this list there won't be any problem with our current...
We are still using Proxmox 6.4 and are not ready to upgrade to the latest version, but support for this version ended two months ago. Is it safe to continue updating Proxmox 6.4 even though support has already ended? We are still receiving update notices for new packages to this date.
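For what it's worth, a quick way to see which updates are still being offered and which repository they come from (pve-manager below is just an example package name):

    # refresh the package lists and list pending updates
    apt-get update
    apt list --upgradable

    # show which repository a given package would be pulled from
    apt-cache policy pve-manager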
Yes, I am sure that no modification is happening. We've been using Proxmox for almost 4 years now, we only set it up the way Proxmox should be set up, and we didn't remove any package. There are actually a few threads here that have had the same issue since 2020; this is also the main...
Definitely VMs, I installed them myself. FreeBSD runs only in a VM as well. I have a VM running Ubuntu 20.04, and a CentOS 7 VM is having the same issue. CentOS 7 is easier to fix, I just need to restore it from backup (not ideal if it contains data collected in day-to-day activity), but for Ubuntu...
After doing the upgrade mentioned above, we are now experiencing random VMs hitting the boot failed problem, from FreeBSD 13 to CentOS 7.
Anyone have any idea or a solution?
We proceeded with installing the additional SSDs one at a time, and now my additional question is:
Can we set the norebalance and nobackfill flags, destroy the OSD, re-add it, then unset the flags and let it rebuild,
to prevent multiple rebuilds each time?
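Roughly the flow we have in mind, as a sketch (the OSD id 12 and the device path are placeholders for whatever we actually replace):

    # pause data movement while the OSD is swapped
    ceph osd set norebalance
    ceph osd set nobackfill

    # take the old OSD out, stop it and destroy it
    ceph osd out 12
    systemctl stop ceph-osd@12
    pveceph osd destroy 12

    # re-create the OSD on the new SSD
    pveceph osd create /dev/sdX

    # let Ceph do a single rebuild once everything is back
    ceph osd unset nobackfill
    ceph osd unset norebalance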
Kernel Version: Linux 5.4.162-1-pve #1 SMP PVE 5.4.162-2 (Thu, 20 Jan 2022 16:38:53 +0100)
PVE Manager Version: pve-manager/6.4-13/9f411e79
We are currently using this version and recently upgraded to it,
and we are still getting this warning. All our Proxmox servers, including...
We just upgraded to this version:
PVE Manager Version: pve-manager/6.4-13/9f411e79
Kernel Version: Linux 5.4.162-1-pve #1 SMP PVE 5.4.162-2
We haven't decided to upgrade to 7.x yet,
and we really don't want to do an update on the system due to the problem with some VMs getting an error with...
@aaron I have 15 OSDs, 5 OSDs on each SSD, and I believe the redundancy is set to 3/2. Is it better to remove 1 OSD at a time, or all 5 OSDs that are on the SSD? To be clearer: 3 Ceph hosts, with 5 OSDs on each host.
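Before deciding, a few checks we could run first, as a sketch (the OSD ids 0-4 are placeholders for the five OSDs on one host):

    # show size / min_size for each pool (should confirm the 3/2 setting)
    ceph osd pool ls detail

    # show how the 15 OSDs are distributed across the 3 hosts
    ceph osd tree

    # ask Ceph whether taking all 5 OSDs of one host down at once would be acceptable
    ceph osd ok-to-stop 0 1 2 3 4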
@michael.schaefers - any thoughts?
@t.lamprecht - thank you for replying. If I may ask here, so that I won't need to create another post: is it possible to migrate VPS and KVM guests from 6.4-11 to 7.1? Won't there be any problem when a VPS/KVM migrates to the newer version of Proxmox? We are thinking of upgrading one server at a time...
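What we would try per guest when upgrading one server at a time, as a sketch (the VM/CT ids and the node name node2 are placeholders):

    # live-migrate a KVM guest to a node that is already on 7.1
    qm migrate 101 node2 --online

    # migrate a container (containers are restarted on the target rather than live-migrated)
    pct migrate 200 node2 --restart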
We are currently running Proxmox 6.4-5 in our system with Ceph (Nautilus). We haven't decided to upgrade to 7.1 yet, and will update soon to 6.4-11.
Would there be a problem if we upgrade our Ceph from Nautilus to Octopus while running 6.4-11?
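Before and after the Nautilus to Octopus step, these are the checks we'd run, as a sketch (this is not the full upgrade procedure, only the surrounding sanity checks):

    # the cluster should be healthy and all daemons on the same release first
    ceph -s
    ceph versions

    # keep OSDs from being marked out while their daemons restart during the upgrade
    ceph osd set noout

    # ... perform the actual Nautilus -> Octopus package upgrade ...

    # afterwards, confirm everything reports Octopus and re-enable out-marking
    ceph versions
    ceph osd unset noout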
I was hoping there is a solution for this rather than reinstalling. We were about to do an upgrade, but after learning that there are users experiencing this, I don't think we want to do an update, since we can't reinstall the node when an issue like this happens. We are using the...
For my FreeBSD KVM, since it's really important, I just restored it from backup and that worked for me, but it's not ideal: if this KVM has a database on it I could possibly lose some data. But for my Ubuntu 18.04 KVM restoring doesn't work, and I really need it back online.
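For reference, the restore-from-backup route, as a sketch (the storage path, archive name and VM id are placeholders, assuming a vzdump backup):

    # restore the guest from its vzdump archive, overwriting the broken VM
    qmrestore /mnt/pve/backup/vzdump-qemu-101-2022_01_20-00_00_01.vma.zst 101 --force

    # afterwards, double-check which disk the VM is set to boot from
    qm config 101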
I just ran an update on my Proxmox, and after doing the upgrade some of my KVM guests with FreeBSD and Ubuntu 18.04 are now having a "boot failed: not a bootable disk" issue.
Kernel Version: Linux 5.4.128-1-pve #1 SMP PVE 5.4.128-2
PVE Manager Version: pve-manager/6.4-13/9f411e79
And several of my...