pve-kernel-5.4.78-1-pve has the same issue as pve-kernel-5.4.73-1-pve
Last known good version for HP Gen9 servers is pve-kernel-5.4.65-1-pve
Maybe the attached screenshot helps to find the cause?
I was very disappointed today to find, after a week-long struggle with the Windows timezone offset, what the cause was. No matter what I did, Windows reset its clock to the Proxmox host's timezone, UTC in my case. I had to change my Proxmox host to my local timezone to force Windows not to change its time to UTC.
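For what it's worth, a workaround often used instead of changing the host timezone is to tell Proxmox that the guest's hardware clock runs on local time. A sketch, assuming the Windows VM has ID 100 (the VMID is a placeholder):

```shell
# Tell QEMU/Proxmox the guest RTC is local time, not UTC
# (run on the Proxmox host; 100 is an example VMID)
qm set 100 --localtime 1
```

After a guest reboot, Windows should read the virtual RTC as local time instead of assuming UTC.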
Developers...
I eventually fixed it by creating new pools following current best-practice recommendations, migrating the data, and deleting the old pools. It still involved more than 60 hours of data movement.
I don't know the root cause, but all problematic PGs got back on a normal recovery track when I restarted the second monitor. I also have too many PGs per OSD, and maybe that was the reason. I set mon_max_pg_per_osd high enough for my current setup, and after restarting the second monitor everything got back on the right track...
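For anyone in the same spot, the two steps above might look roughly like this (the value 400 and the mon ID node2 are just examples; pick values that match your cluster, and note that older Ceph releases use injectargs instead of `ceph config set`):

```shell
# Raise the per-OSD PG cap so activating PGs are not blocked (example value)
ceph config set global mon_max_pg_per_osd 400

# Restart the second monitor (replace node2 with your actual mon ID)
systemctl restart ceph-mon@node2
```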
Unfortunately I have the very same issue. It was more or less fine until backup time; then the backup failed and 2 rbd devices got stuck in iowait. In my quest to fix this, my cluster now has a lot of activating+remapped PGs. Basically every OSD in my cluster now has some PGs in this state.
Well, regarding my problem: it turned out to be an issue with the combination of the 3Par 8200, multipath, and LVM. I disabled discards in the LVM layer and the problems stopped.
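A sketch of what "disabling discards in the LVM layer" likely means in practice (the exact setting used wasn't posted, but this is the standard LVM knob for it):

```
# /etc/lvm/lvm.conf (excerpt)
devices {
    # Do not issue discards to the underlying device when an LV is
    # removed or shrunk; avoids pushing TRIM/UNMAP down the multipath stack
    issue_discards = 0
}
```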
Anyway, I now have a different issue:
The web interface stops after the first restart, and the node still shows a red cross when viewed from Proxmox...
Probably not related to Proxmox but to the kernel. The LVM disk starts to lock up when a VM disk deletion is attempted. The setup is a Proxmox blade connected to a 3Par 8200 over FCoE, using 4 paths.
Syslog:
/etc/multipath.conf
LVM data:
I once tried a setup like this: two roots by device type, with CRUSH rules placing the first data copy on the SSD root and the replica on the HDD root. It went quite well until I decided to do maintenance on a 3-node Proxmox/CEPH cluster. It turned out that there was a percentage of data where both primary and replica were on...
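For context, the rule described above is usually written along these lines. This is only a sketch: the root names "ssd" and "hdd" are assumptions, it targets size-2 pools, and you should compile and test any crushmap change with crushtool before injecting it:

```
# Hypothetical CRUSH rule: primary copy on the ssd root, remaining replicas on hdd
rule ssd-primary {
    ruleset 1
    type replicated
    min_size 2
    max_size 2
    step take ssd
    step chooseleaf firstn 1 type host
    step emit
    step take hdd
    step chooseleaf firstn -1 type host
    step emit
}
```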
3 node cluster with CEPH storage cluster. All hypervisors have identical hardware. Latest release Proxmox Virtual Environment 4.2-5/7cf09667.
Cannot live-migrate VMs. Tried migrating from prox-01 to prox-02 and prox-03.
Migration terminates with this error:
May 30 15:00:32 starting migration of...