Search results

  1. KVM Windows VMs and losing network connectivity

    Hi IanCH, I tried Realtek (rtl8139), but it is a 100 Mb (Fast Ethernet) interface only, which is why I turned to VirtIO. Following your answer, I tried rtl8139 again, and the interface also went down after a few seconds. I also tried to add a VLAN on this interface (there was none), but it...
  2. KVM Windows VMs and losing network connectivity

    We have also encountered the problem for a few months, perhaps since the upgrade to 7.0, perhaps before. Until a few days ago, it was just random and merely annoying: some Windows interfaces were going down, and simply disabling and re-enabling them was sufficient for some days. But we now encounter a...
  3. no network interface found with QLogic FastLinQ 41264

    Hi Dominic, No, I did not try, because I thought the problem arose from Debian and non-free drivers, so I assumed it would not install under Debian either. But, indeed, you don't use the same kernel. Now, the server has been reinstalled under Rocky Linux (for another backup solution), and as I...
  4. [SOLVED] Problem upgrading from ceph Luminous to Nautilus : crush map has legacy tunables (require firefly, min is hammer)

    Hi all, This thread is rather old, but today I solved the problem, and I thought it would be a good thing to share how I did it. The problem was not solved during all this time, but it was only a warning, it seemed harmless, and I left it as it was. But today, I upgraded to Proxmox 6.4, and then...
  5. no network interface found with QLogic FastLinQ 41264

    Hello all, I tried today to install Proxmox Backup Server on a Dell PE R740. It was my first try at it. It stopped with 'no network interface found'. The server has a QLogic FastLinQ 41264 with two SFP+ ports and two 1 Gb Ethernet ports. The problem has already been reported for such cards with...
  6. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Thanks for the tutorial! It worked perfectly here for two Dell PE R640s with a BOSS controller. They were two new servers installed with Proxmox 6.3. Note that when I upgraded my other nodes from 5.x to 6.1, Dell OMSA kept working; I did not have to reinstall it.
  7. [SOLVED] Problem upgrading from ceph Luminous to Nautilus : crush map has legacy tunables (require firefly, min is hammer)

    For the log error, coming back to hammer solves the issue: ceph config set mon mon_crush_min_required_version hammer, as stated in this thread: https://forum.proxmox.com/threads/upgrade-ceph-to-nautilus-14-2-2.57304/ (the command is sketched again below these results)
  8. [SOLVED] Problem upgrading from ceph Luminous to Nautilus : crush map has legacy tunables (require firefly, min is hammer)

    I just verified, and all OSDs are indeed running the Nautilus version. So why the warning? root@prox2orsay:~# ceph tell osd.* version osd.0: { "version": "ceph version 14.2.5 (3ce7517553bdd5195b68a6ffaf0bd7f3acad1647) nautilus (stable)" } osd.1: { "version": "ceph version 14.2.5...
  9. [SOLVED] Problem upgrading from ceph Luminous to Nautilus : crush map has legacy tunables (require firefly, min is hammer)

    For the command setting the minimum version to firefly, I see a lot of messages in the logs saying: "set_mon_vals failed to set mon_crush_min_required_version = firefly: Configuration option 'mon_crush_min_required_version' may not be modified at runtime"
  10. [SOLVED] Problem upgrading from ceph Luminous to Nautilus : crush map has legacy tunables (require firefly, min is hammer)

    Hi all, and a happy new year to all. I took advantage of the new year period, when there were few people in the lab, to finally upgrade my 5.4 clusters to 6.1. I tested the procedure on test clusters, and all went fine. I followed the guide to upgrade from 5.4 to 6.0. I also have Ceph...
  11. [SOLVED] Dependency problem breaks upgrade from 5.2 to 5.3

    After reading other threads, I saw that the same problem, at least the conflict with ZFS packages, has been encountered by others. Thomas Lamprecht said it was a conflict between the latest versions of zfsutils-linux, which support only insserv > 1.18, and the version installed by Debian...
  12. [SOLVED] Dependency problem breaks upgrade from 5.2 to 5.3

    Hi all, I have not yet updated my Proxmox clusters to 5.3, because I was too busy, but I am in the process of doing so, and as usual, I test the procedure on a test cluster. It is a three-node cluster built using nested virtualization, so it is a cluster of three nodes running on a single physical...
  13. fuckwit/kaiser/kpti

    OpenVZ needed a special kernel, so it is not certain it will get patches for this flaw. It is a bit like Xen, where no patches are available yet. KVM and LXC are maintained inside the standard Linux kernel, so they will benefit from the vanilla kernel patches.
  14. fuckwit/kaiser/kpti

    A check box in the interface, which would allow enabling/disabling it for the entire cluster and showing whether or not it is disabled, would be easier to manage, and you would actually see it. If it is only a parameter in the GRUB config file, it is easy to miss (a sketch of that parameter follows below these results).
  15. fuckwit/kaiser/kpti

    Hi all, The application of this kernel security patch could result in a noticeable performance impact, notably on servers. See for example this first benchmark from Phoronix: https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=1 So I wonder if this patch should be applied...
  16. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Finally, I solved the problem by rebooting the second node. I think it was in some state of error, and as the cluster has only three nodes, hence a quorum of two, and the third node was not yet upgraded, there was no quorum. It is much better now: ~# ceph -s cluster: id...
  17. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Some more information after reading another thread: ~# systemctl status ceph ceph-osd ● ceph.service - PVE activate Ceph OSD disks Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled) Active: inactive (dead) since Mon 2017-08-14 22:19:06 CEST; 18h ago Main...
  18. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Hello all, I am a little bit new to Ceph. A few months ago I installed a new Proxmox cluster with Proxmox 5 beta using 3 Dell PE R630 nodes, each with 2 SSDs (one for the OS and one for journals) and eight 500 GB HDDs for OSDs, so I have 24 OSDs. Proxmox and Ceph share the same servers. I...
  19. Trouble with ceph on PVE 5

    For completeness, I managed to deal with the warning 'no active mgr' by creating one with pveceph (sketched again below these results). # pveceph createmgr creating manager directory '/var/lib/ceph/mgr/ceph-prox-nest3' creating keys for 'mgr.prox-nest3' setting owner for directory enabling service 'ceph-mgr@prox-nest3.service'...
  20. Trouble with ceph on PVE 5

    I ran into the same problem. I am testing the migration from PVE 4.4 with Ceph Jewel to Proxmox 5.0 on a test cluster. So I first migrated from Ceph Jewel to Luminous following the documentation, then migrated from Jessie to Stretch. I ended up with this ceph package, as reported above: #...
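
For reference, a minimal sketch of the Ceph commands quoted in results 7, 8 and 9, assuming a root shell on a monitor node of a Nautilus cluster. The last command (ceph osd crush tunables) does not appear in the quoted posts; it is only one possible way to address the legacy-tunables warning itself, may trigger data rebalancing, and should be checked against the Ceph documentation first.

    # Verify that every OSD really runs Nautilus (result 8)
    ceph tell osd.* version

    # Setting the minimum required CRUSH version back to firefly fails at
    # runtime (result 9); lowering it to hammer clears the log error (result 7)
    ceph config set mon mon_crush_min_required_version hammer

    # Assumption, not from the quoted posts: raise the CRUSH tunables themselves
    # to silence the 'legacy tunables' warning (may rebalance data)
    ceph osd crush tunables hammer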
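
Related to results 14 and 15, a rough sketch of how the KPTI (Meltdown) mitigation can be checked and, if really wanted, disabled through a GRUB kernel parameter on a Proxmox/Debian node. This is an illustration only, not a recommendation: pti=off leaves the host exposed to Meltdown, and the sysfs file below only exists on kernels recent enough to report vulnerabilities.

    # Check whether page table isolation is active on the running kernel
    dmesg | grep -i isolation
    cat /sys/devices/system/cpu/vulnerabilities/meltdown

    # To disable it (at your own risk), add pti=off to the kernel command line
    # in /etc/default/grub, for example:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet pti=off"
    # then regenerate the GRUB configuration and reboot
    update-grub
    reboot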
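
As in result 19, a minimal sketch of creating a Ceph manager daemon with Proxmox's pveceph tool to clear the 'no active mgr' warning, assuming a PVE 5 node that is already part of the Ceph cluster; the directory and service names in the quoted post simply follow the node name.

    # Create a manager on this node; pveceph creates the keys and the
    # /var/lib/ceph/mgr/ directory and enables the ceph-mgr service
    pveceph createmgr

    # Verify that the cluster now reports an active manager
    ceph -s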
