Search results

  1.

    [TUTORIAL] Dell Openmanage on Proxmox 6.x

    The GPG error is usually an IPv6 problem.
  2.

    [TUTORIAL] Dell Openmanage on Proxmox 6.x

    I just did a new install on an R740 and an R430 with the Proxmox 6.2-1 ISO and OMSA 930. Works perfectly. I also have 930 on an R730, NF500 and PE2950 following this tutorial. For the PE2950 I have the problem that the CMOS battery is not found ... who cares? :p Sorry, I don't have an R710 to test.
  3.

    [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Thanks a lot for your feedback. I'll update the tutorial to add ncurses.
  4.

    [TUTORIAL] Dell Openmanage on Proxmox 6.x

    You're welcome. I received so much from the community. It was my time to share ;)
  5.

    Upgrading proxmox cluster from 5.6 to 6

    My 2 cents after upgrading my cluster from 5.4 to 6: do update fast. Node by node, I followed the tutorial for the Proxmox upgrade as well as the Ceph one. While I was running mixed versions, corosync kept losing synchronization. I had to stop corosync and restart it in the foreground ("corosync -f") on all nodes until...
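    The foreground restart mentioned above can be sketched roughly as follows (an illustrative sequence, not the exact commands from the post; run it on one node at a time, on a node you can afford to take out of quorum):

    ```shell
    # Stop the corosync service on this node
    systemctl stop corosync

    # Run corosync in the foreground to watch it resynchronize;
    # repeat node by node until membership is stable
    corosync -f

    # Once the cluster is stable, Ctrl+C the foreground process
    # and start the service normally again
    systemctl start corosync
    ```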
  6.

    Storage Idea

    He explained that he can have multiple 1U servers or 2 old R720XDs. With those he can make an HA setup by spreading the disks between the 2 boxes, enabling Ceph, and making a Proxmox cluster in a 2-node setup.
  7.

    Storage Idea

    You should use Ceph RBD with a replica target of 2 for the RBD pool. Each Proxmox node of your setup would be hypervisor AND storage.
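    A 2-replica pool as suggested above could be set up roughly like this (a sketch; the pool name `rbd` and the PG count of 128 are assumptions, and size 2 trades redundancy for capacity):

    ```shell
    # Create a replicated pool and keep 2 copies of each object
    ceph osd pool create rbd 128 128 replicated
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1   # allow I/O with one surviving replica
    ceph osd pool application enable rbd rbd
    ```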
  8.

    [TUTORIAL] Dell Openmanage on Proxmox 6.x

    I've finally succeeded in installing OMSA on Proxmox 6.x, and in return for all the information I've found thanks to the community, I wanted to share my findings. Here we go. Be sure to be logged in as root at all times: sudo su. First, be sure to remove OMSA from Proxmox 5.4 before upgrading. apt...
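    Removing OMSA before the upgrade might look like the following (a sketch; Dell ships OMSA as srvadmin-* packages on Debian-based systems, but verify the exact package names on your node first):

    ```shell
    # See which OMSA packages are installed, then purge them
    dpkg -l 'srvadmin-*'
    apt purge 'srvadmin-*'
    apt autoremove
    ```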
  9.

    pmxcfs segfaults

    My 2 cents: [Wed Feb 13 05:24:44 2019] perf: interrupt took too long (4931 > 4920), lowering kernel.perf_event_max_sample_rate to 40500 [Wed Apr 10 16:24:53 2019] cfs_loop[6168]: segfault at 7f3bad915000 ip 00007f3bad08378a sp 00007f3ba4c323a8 error 4 in libc-2.24.so[7f3bad000000+195000] [Sun...
  10.

    Bonding network + iDRAC fencing device

    If the iDRAC has its own port, you can bond; otherwise you can't. Lokytech (Proxmox on PowerEdge 2950/R610/R730)
  11.

    ceph.com down?

    +1 for eu.ceph.com. Edit the pveceph script to change ceph.com to eu.ceph.com if you're installing a new cluster.
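    A minimal sketch of that edit, done here on a throwaway copy so nothing is changed until you have reviewed the result (the location and contents of the real pveceph script vary by PVE version, so the file below is a stand-in, and the `download.ceph.com` hostname is an assumption):

    ```shell
    # Stand-in for the pveceph script; on a real node, back up
    # and edit the actual file instead.
    src=$(mktemp)
    printf 'my $base = "http://download.ceph.com/debian";\n' > "$src"

    # Swap the upstream mirror for the European one
    sed -i 's/download\.ceph\.com/eu.ceph.com/g' "$src"

    grep 'eu.ceph.com' "$src"   # verify the replacement took effect
    ```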
  12.

    fenced (and rgmanager) not * always * automatically started after boot

    Hi, check on each node: tail -f /var/log/cluster/*.log. You should see one node trying to fence another while your fencing is not working correctly. Correct your cluster.conf and everything should restart correctly.
  13.

    [SOLVED] Proxmox VE - IPv6 Problems

    Re: Proxmox VE - IPv6 Problems. A HUGE thank you, I have successfully configured IPv6 on my Proxmox installation. The missing line: "iface lo inet6 loopback".
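    For reference, the loopback stanza that line belongs to, as it would appear in /etc/network/interfaces (the IPv4 lines are the stock Debian defaults):

    ```
    auto lo
    iface lo inet loopback
    iface lo inet6 loopback
    ```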
  14.

    Error kvm : cpu0 unhandled wrmsr & unhandled rdmsr

    Hello everyone, I think Warod is right. I've encountered huge problems with the Broadcom BCM 5709 on ESXi as well. It doesn't support MTU 9000 with iSCSI offload. And it isn't a driver problem, but a firmware one ... We don't have the budget to switch to Intel network cards, so MTU 1500 and VMware...
  15.

    Warning on Proxmox Dashboard

    Nope, or very complex. How could the hypervisor know what kind of partition the disk has (NTFS, ext3, ext4, XFS, MooseFS, ...)? It simply can't. But if you use a monitoring server (like Nagios/Centreon, Icinga, ...) to monitor all your physical and virtual machines, yes. But not on Proxmox...
  16.

    Dell OpenManage Server Admin on PVE host

    Nope, you've got it wrong. To have IPMI working, you have to install Dell OMSA. Install it with:
  17.

    Dell OpenManage Server Admin on PVE host

    I monitor all the other hardware with IPMI through NRPE. You have to do some tuning with sudoers AND/OR "chmod 664 /dev/ipmi0". I attached the scripts for NRPE to check PS, FAN and RAID with IPMI, and RAID with MegaCli64. You can also find another plugin to check OpenManage with SNMP if you have configured...
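    A toy check in the spirit of those NRPE scripts (a sketch, not the attached scripts themselves; the parsing assumes the usual pipe-separated `ipmitool sdr` output and is split into a function so it can be tried without real hardware):

    ```shell
    #!/bin/sh
    # Minimal NRPE-style fan check. Nagios exit codes: 0=OK, 2=CRITICAL, 3=UNKNOWN.
    check_fan_output() {
        [ -z "$1" ] && { echo "UNKNOWN - no fan sensors readable"; return 3; }
        # The third pipe-separated field of each sdr line is the sensor status
        if printf '%s\n' "$1" | awk -F'|' '{gsub(/ /,"",$3); if ($3 != "ok") exit 1}'
        then echo "OK - all fan sensors ok"; return 0
        else echo "CRITICAL - fan sensor not ok"; return 2
        fi
    }

    # On a real node (needs read access to /dev/ipmi0, per the tuning above):
    # check_fan_output "$(ipmitool sdr type Fan)"
    ```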
  18.

    Error kvm : cpu0 unhandled wrmsr & unhandled rdmsr

    I got this issue too. I have attached the dmesg output of my 3 servers. virt1 = Dell PowerEdge 2950 / Uptime: 112 days virt2 = Dell PowerEdge 2950 / Uptime: 112 days virt3 = Dell PowerVault NF500 / Uptime: 43 days All VMs: Disks: RAW - VirtIO/IDE on NFS/LOCAL. Network: Debian: VirtIO -...
  19.

    Proxmox + Supermicro X8DT3-F = no hard disc found

    Like tom said, I think you should go into the RAID card BIOS. At boot, your server should show the name of your card and a text like "Press Ctrl + M to enter configuration", or something similar with whatever the key combination is. Press those keys and, there, define 1 or more Volume...