Recent content by Instigater

  1. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Your latest test kernel pve_5.4.78-1~b1_amd64 really fixes the issue! Thank you.
  2. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    How do I pin GRUB to a specific kernel version for now, while the issue is being fixed?
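
    For reference, a minimal sketch of the usual Debian/GRUB approach, assuming the known-good 5.4.65 kernel is still installed; the exact menu entry title must be copied from /boot/grub/grub.cfg on the machine in question:

        # find the exact menuentry title of the known-good kernel
        grep "menuentry '" /boot/grub/grub.cfg | grep 5.4.65
        # in /etc/default/grub, point GRUB_DEFAULT at that title, using the
        # "submenu title>menuentry title" form for entries under "Advanced options", e.g.:
        # GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.4.65-1-pve"
        update-grub
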
  3. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    pve-kernel-5.4.78-1-pve has the same issue as pve-kernel-5.4.73-1-pve. The last known good version for HP Gen9 servers is pve-kernel-5.4.65-1-pve. Maybe the attached screenshot helps to find the cause?
  4. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Updating the firmware of all components in the blade was the first thing I tried, even before searching the internet.
  5. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    The same issue on an HP BL460c Gen9! Masking systemd-udev-settle.service doesn't help.
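
    For reference, masking the service is a one-liner; this is the step that was tried here, without success:

        # prevent systemd-udev-settle from ever being started
        systemctl mask systemd-udev-settle.service
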
  6. Clock on VM using Proxmox 5

    I was very disappointed today, after a week-long struggle with the Windows timezone offset. No matter what I did, Windows set its clock to the Proxmox host's timezone, UTC in my case. I had to change my Proxmox host to my local timezone to force Windows not to change its time to UTC. Developers...
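
    For reference, a hedged sketch of the two usual fixes: Windows assumes the hardware clock runs in local time, so either have QEMU present the virtual RTC in local time, or tell Windows the RTC is UTC. The VMID 100 is a placeholder:

        # on the Proxmox host: present the virtual RTC in local time
        qm set 100 --localtime 1

        # or, inside the Windows guest: treat the hardware clock as UTC
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f
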
  7. CEPH problem after upgrade to 5.1 / slow requests + stuck request

    I eventually fixed it by creating new pools following current best-practice recommendations, migrating the data, and deleting the old pools. It still involved more than 60 hours of data movement.
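
    For reference, a minimal sketch of that kind of migration for RBD pools; pool and image names are placeholders, the PG count must be sized to the actual OSD count, and images should only be copied while the VMs using them are stopped:

        # create a replacement pool with a sane PG count and tag it for RBD
        ceph osd pool create rbd-new 128 128 replicated
        ceph osd pool application enable rbd-new rbd
        # copy each image, repoint the storage/VM config, then delete the old pool
        rbd cp rbd-old/vm-100-disk-1 rbd-new/vm-100-disk-1
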
  8. CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Just a small update: mon_max_pg_per_osd was picked up by the monitors, but ceph status still barked about it until I restarted the active mgr daemon.
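
    For reference, a short sketch of that last step; the node name is a placeholder for whatever ceph -s reports as the active mgr:

        # identify the active mgr, then restart it on that node
        ceph -s | grep mgr
        systemctl restart ceph-mgr@<nodename>.service
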
  9. CEPH problem after upgrade to 5.1 / slow requests + stuck request

    I don't know the solution, but all problematic PGs got back on a normal recovery track when I restarted the second monitor. I also have too many PGs per OSD, and maybe that was the reason. I set mon_max_pg_per_osd high enough to cover my current setup, and on the second monitor restart everything got back on the right track...
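
    For reference, a hedged sketch of raising that limit on a running Luminous-era cluster; 400 is only a placeholder value, and the setting also needs to go into ceph.conf to survive restarts:

        # raise the limit on the running monitors
        ceph tell mon.* injectargs '--mon_max_pg_per_osd 400'
        # and persist it in /etc/pve/ceph.conf under [global]:
        # mon_max_pg_per_osd = 400
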
  10. CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Unfortunately, I have the very same issue. It was more or less fine until backup time; then the backup failed and 2 rbd devices got stuck in iowait. In a quest to fix this, my cluster now has a lot of activating+remapped PGs. Basically every OSD in my cluster now has some PGs in this state.
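
    For reference, the stuck PGs can be listed with standard Ceph tooling:

        # show which PGs are stuck and why
        ceph health detail
        ceph pg dump_stuck inactive
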
  11. Proxmox VE 5.0 beta2 released!

    Well, regarding my problem: it turned out to be a problem with the combination of the 3Par 8200, multipath, and LVM. I disabled discards in the LVM layer and the problems stopped. Anyway, I now have a different issue: the web interface stops after the first restart, and the node is still red-crossed when viewed from the Proxmox...
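
    For reference, a sketch of where that knob lives; issue_discards sits in the devices section of /etc/lvm/lvm.conf and controls whether LVM passes discards to the underlying storage when LVs are removed or shrunk:

        # /etc/lvm/lvm.conf
        devices {
            issue_discards = 0    # do not pass discards down to the 3Par LUNs
        }
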
  12. Proxmox VE 5.0 beta2 released!

    Probably not related to Proxmox but to the kernel. The LVM disk starts to lock up when a VM disk is deleted. The setup is a Proxmox blade with FCoE to a 3Par 8200, using 4 paths over FCoE. Syslog: /etc/multipath.conf LVM data:
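
    For reference, a hedged sketch of a typical multipath.conf device stanza for 3PAR arrays, along the lines of HPE's implementation guides; the exact values should be taken from HPE's documentation for the array and OS in question:

        device {
            vendor                 "3PARdata"
            product                "VV"
            path_grouping_policy   group_by_prio
            path_selector          "round-robin 0"
            path_checker           tur
            hardware_handler       "1 alua"
            prio                   alua
            failback               immediate
            no_path_retry          18
        }
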
  13. Flashcache vs Cache Tiering in Ceph

    I once tried a setup like this: two roots by device type, with rules specifying that the first data copy goes to the SSD root and the replica to HDD. It went quite well until I decided to do maintenance on the 3-node Proxmox/Ceph cluster. It turned out that there was a percentage of data where primary and replica were on...
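
    For reference, a hedged sketch of that kind of rule, following the classic "ssd-primary" example from the Ceph documentation; root and rule names are placeholders:

        rule ssd-primary {
            ruleset 5
            type replicated
            min_size 1
            max_size 10
            # first copy (the primary) lands under the ssd root
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            # remaining replicas land under the hdd root
            step take hdd
            step chooseleaf firstn -1 type host
            step emit
        }
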
  14. Live migration error (segfaults)

    Here you are!

        root@prox-01:~# pveversion -v
        proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
        pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
        pve-kernel-4.2.6-1-pve: 4.2.6-36
        pve-kernel-4.4.8-1-pve: 4.4.8-51
        pve-kernel-4.2.8-1-pve: 4.2.8-41
        pve-kernel-4.2.2-1-pve: 4.2.2-16
        lvm2...
  15. Live migration error (segfaults)

    A 3-node cluster with Ceph storage. All hypervisors have identical hardware and run the latest release, Proxmox Virtual Environment 4.2-5/7cf09667. I cannot live-migrate VMs. I tried migrating from prox-01 to prox-02 and prox-03; the migration terminates with an error: May 30 15:00:32 starting migration of...
