I tried Realtek (rtl8139), but it is only a 100 Mb (Fast Ethernet) interface, which is why I turned to VirtIO. Following your answer, I tried rtl8139 again, and the interface also went down after a few seconds. I also tried to add a VLAN on this interface (there was none), but it...
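For reference, switching the emulated NIC model or adding a VLAN tag can also be done from the CLI with qm set; the VM id (101), bridge (vmbr0) and tag (20) below are only placeholders for your own values:

qm set 101 --net0 e1000,bridge=vmbr0            # try another emulated NIC model on net0
qm set 101 --net0 virtio,bridge=vmbr0,tag=20    # back to VirtIO, this time with a VLAN tag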
We have also been encountering the problem for a few months, perhaps since the upgrade to 7.0, perhaps before. But until a few days ago, it was just random and merely annoying: some Windows interfaces would go down, and simply disabling and re-enabling them was enough for a few days.
But we now encounter a...
No, I did not try, because I thought the problem arose from Debian and the non-free drivers. So I thought it would not install under Debian either. But, indeed, you don't use the same kernel.
Now, the server has been reinstalled under Rocky Linux (for another backup solution), and as I...
This thread is rather old, but today I solved the problem, and I thought it would be good to let you know how I did it.
The problem was never solved during all this time, but it was only a warning, it seemed harmless, and I left it as it was.
But today, I upgraded to Proxmox 6.4, and then...
Today I tried to install Proxmox Backup Server on a Dell PE R740. It was my first try at it. It stopped with 'no network interface found'. The server has a QLogic FastLinQ 41264 with two SFP+ ports and two 1 Gb Ethernet ports. The problem has already been reported for such cards with...
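In case it helps, here is roughly how I would check what the card needs from an already installed Debian/Proxmox system or a rescue shell. This is only a sketch, assuming the card is handled by the qede driver and that the missing piece is the non-free firmware (which, if I am not mistaken, Debian ships in the firmware-qlogic package):

lspci -nnk | grep -iA3 ethernet      # see which driver is bound (or missing)
dmesg | grep -iE 'qed|firmware'      # look for firmware load errors
apt install firmware-qlogic          # needs the non-free component enabled in sources.list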
Thanks for the tutorial! It worked perfectly here for two Dell PE R640 with BOSS controllers.
These were two new servers installed with Proxmox 6.3. Note that when I upgraded my other nodes from 5.x to 6.1, Dell OMSA kept working; I did not have to reinstall it.
For the log error, going back to hammer solves the issue:
ceph config set mon mon_crush_min_required_version hammer
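To check that the value really got applied, it can be read back afterwards (at least on nautilus):

ceph config get mon mon_crush_min_required_version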
As stated in this thread:
I just verified, and all OSDs are indeed on the nautilus version. So, why the warning?
root@prox2orsay:~# ceph tell osd.* version
"version": "ceph version 14.2.5 (3ce7517553bdd5195b68a6ffaf0bd7f3acad1647) nautilus (stable)"
"version": "ceph version 14.2.5...
For the command setting the min version to firefly, I see a lot of messages in the logs saying:
"set_mon_vals failed to set mon_crush_min_required_version = firefly: Configuration option 'mon_crush_min_required_version' may not be modified at runtime"
And happy new year to all.
I just took advantage of the new year period, when there were few people in the lab, to finally upgrade my 5.4 clusters to 6.1. I tested the procedure on test clusters, and all went fine. I followed the guide to upgrade from 5.4 to 6.0. I also have Ceph...
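For those doing the same upgrade, the pve5to6 checklist script shipped with the latest 5.4 packages can be run before and during the upgrade to spot remaining issues:

pve5to6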
After reading other threads, I saw that the same problem, or at least the conflict with the zfs packages, has been encountered by others.
Thomas Lamprecht said it was a conflict between the latest versions of zfsutils-linux, which only support insserv > 1.18, and the version installed by Debian...
I have not yet updated my Proxmox clusters to 5.3, because I was too busy, but I am in the process of doing so, and as usual, I am testing the process on a test cluster. It is a three-node cluster built using nested virtualization, so it is a cluster of three nodes built on a single physical...
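In case it is useful to someone, this is roughly how nested virtualization gets enabled on the physical host (Intel example; for an AMD host the module is kvm-amd and the file name changes accordingly):

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel      # no VM must be running during the reload
cat /sys/module/kvm_intel/parameters/nested      # should now report Y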
OpenVZ needed a special kernel, so it is not certain it will get patches for this flaw. It is a bit like Xen, where no patches are available yet.
kvm and lxc are maintained inside the standard Linux kernel, so they will benefit from the vanilla kernel patches.
A check box in the interface, which would allow disabling/enabling it for the entire cluster and seeing whether or not it is disabled, would be easier to manage. And you would see it. If it is a parameter in the GRUB config file, it is easy not to notice it.
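To illustrate what I mean about it being easy to miss: disabling the page table isolation part of the mitigation is typically done by adding pti=off (or nopti) to the kernel command line in /etc/default/grub, and nothing in the GUI tells you it is there. I am not recommending it, just showing where the setting hides:

GRUB_CMDLINE_LINUX_DEFAULT="quiet pti=off"       # in /etc/default/grub
update-grub                                      # then reboot the node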
Applying this kernel security patch could result in a noticeable performance impact, notably on servers. See for example this first benchmark from Phoronix:
So I wonder if this patch should be applied...
Finally, I solved the problem by rebooting the second node. I think it was in some error state, and since the cluster has only three nodes (hence quorum is two) and the third node was not yet upgraded, there was no quorum.
It is much better now:
~# ceph -s
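For reference, the two checks I would use in such a situation, one for the Proxmox cluster quorum and one for the Ceph monitor quorum:

pvecm status
ceph quorum_status --format json-pretty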
Some more information after reading another thread:
~# systemctl status ceph ceph-osd
● ceph.service - PVE activate Ceph OSD disks
Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2017-08-14 22:19:06 CEST; 18h ago
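As far as I understand, that ceph.service unit is only the PVE helper that activates the OSD disks; the OSD daemons themselves run as ceph-osd@<id> units, so they can be started again like this (OSD id 0 as an example):

systemctl start ceph-osd@0.service
systemctl start ceph.target                      # or start everything Ceph-related at once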
I am a little bit new to Ceph. A few months ago I installed a new Proxmox cluster with Proxmox 5 beta, using 3 Dell PE R630 nodes, each with 2 SSDs (one for the OS and one for journals) and 8 500 GB HD drives for OSDs. So I have 24 OSDs. Proxmox and Ceph share the same servers.
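For the record, with such a layout the OSDs can be created one disk at a time with pveceph, pointing the journal at the SSD; the device names below are placeholders, and I quote the option name from memory, so check the pveceph man page first:

pveceph createosd /dev/sdc --journal_dev /dev/sdb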
For completeness, I managed to deal with the warning 'no active mgr' by creating one with pveceph.
# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-prox-nest3'
creating keys for 'mgr.prox-nest3'
setting owner for directory
enabling service 'ceph-mgr@prox-nest3.service'...
I ran into the same problem. I am testing the migration from PVE 4.4 with Ceph jewel to Proxmox 5.0 on a test cluster.
So I first migrated from Ceph jewel to luminous following the documentation, then migrated from jessie to stretch.
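For reference, on stretch the Ceph packages come from the Proxmox ceph-luminous repository, so the sources entry looks roughly like this (the file name is just the one I use):

echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > /etc/apt/sources.list.d/ceph.list
apt update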
I ended up with this ceph package, as reported above: