Search results

  1. Migration failed due to

    Problem solved. You have to copy the public key from /etc/ssh/ssh_host_rsa.pub to the /etc/ssh/ssh_known_hosts file of the other nodes.
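    The fix described above could be scripted roughly as follows. The node name and address are placeholders, and the key filename follows the standard OpenSSH naming (`ssh_host_rsa_key.pub`), since the snippet abbreviates the path:

    ```shell
    # Hedged sketch: distribute a node's RSA host key to the system-wide
    # known-hosts file of the other nodes. Hostname/IP are placeholders.

    # On the source node, read the RSA host public key
    # (commonly /etc/ssh/ssh_host_rsa_key.pub on current OpenSSH installs):
    KEY=$(cat /etc/ssh/ssh_host_rsa_key.pub)

    # On every other node, append it to /etc/ssh/ssh_known_hosts, prefixed
    # with the names/addresses under which the source node is reached:
    echo "node1,192.0.2.11 $KEY" >> /etc/ssh/ssh_known_hosts
    ```

    On current Proxmox VE versions, `pvecm updatecerts` regenerates the cluster-wide known-hosts entries, which may be a less error-prone alternative to editing the file by hand.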
  2. Migration failed due to

    I know what the problem is; I am looking for a solution ;-) The correct id_rsa.pub is within the authorized keys of the other nodes. SSH-ing to the machine does not throw this error, so I am confused.
  3. Migration failed due to

    When trying to migrate a VM to another node, I get an error that the migration fails because of an RSA mismatch. I can ssh from the original node to the destination node without any problems. Any idea how to fix the problem? 2017-12-14 18:33:02 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o...
  4. Renew SSL certificate after ip change of the GUI

    Dear colleagues, I ran into a problem after I changed the IP of the GUI interface. The Web UI was still reachable, but when opening the VNC console, I got a "failed to connect to server" error. I moved the files /etc/pve/pve-root-ca.pem and /etc/pve/priv/pve-root-ca.key somewhere else and did...
  5. Ceph Cluster Fencing

    I found out that I had a wrong understanding of how the IPMI watchdog is used. Basically it needs a driver to talk to a piece of hardware within the baseboard management controller (BMC). There is no communication via LAN, but direct access to the IPMI/BMC hardware, which gets polled and...
  6. Hardware watchdog (ipmi_watchdog) on Proxmox 5

    Problem solved. After the latest Proxmox update the ipmi_watchdog driver could be loaded and the *.ko file is available. I tried to implement the hardware watchdog on an old HP DL160 Gen6 machine. I tried ipmi_watchdog as well as hpwdt module, but the output of the ipmitool says: ipmitool mc...
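    The load-and-verify steps implied above can be sketched like this; the module and device names are the standard Linux ones, and the `WATCHDOG_MODULE` setting in /etc/default/pve-ha-manager is assumed from the Proxmox HA manager's configuration, not from the post itself:

    ```shell
    # Hedged sketch: load the IPMI hardware watchdog and check it is active.
    modprobe ipmi_watchdog
    lsmod | grep ipmi        # ipmi_watchdog should now appear in the list
    ls -l /dev/watchdog      # device node provided by the watchdog driver

    # To make the Proxmox HA stack use it, the module can be declared in
    # /etc/default/pve-ha-manager, e.g.:
    #   WATCHDOG_MODULE=ipmi_watchdog
    ```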
  7. Hardware watchdog (ipmi_watchdog) on Proxmox 5

    Dear colleagues, I moved to Proxmox 5 in a dev environment and was wondering how to set up the hardware watchdog. On the same hardware running Proxmox 4, the ipmi_watchdog kernel module was loaded. Now I can only find the following modules. lsmod |grep ipmi ipmi_ssif 24576 0...
  8. 10Gbit driver (ixgbe) with NAPI support

    Problem solved: edit /src/kcompat.h, rename napi_consume_skb to ___napi_consume_skb, then run make.
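    A scripted version of that fix might look like the following. The driver directory is taken from the build error quoted in the thread; the exact location of kcompat.h inside the unpacked source tree is an assumption, since the post abbreviates the path:

    ```shell
    # Hedged sketch: rename the conflicting symbol in the out-of-tree
    # ixgbe source, then rebuild against the installed pve kernel headers.
    cd /root/ixgbe/ixgbe-5.1.3
    sed -i 's/\bnapi_consume_skb\b/___napi_consume_skb/g' src/kcompat.h
    make
    ```

    The rename sidesteps a symbol clash with the in-kernel napi_consume_skb that the newer pve kernel headers already provide.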
  9. 10Gbit driver (ixgbe) with NAPI support

    I was wondering whether someone managed to build ixgbe drivers for the current Proxmox 4.4. When I try to build the driver, I get an error: make[1]: Entering directory '/usr/src/linux-headers-4.4.62-1-pve' CC [M] /root/ixgbe/ixgbe-5.1.3/src/ixgbe_main.o In file included from...
  10. Message too long, mtu=1500 on OVSInt Port

    I just ran into trouble with enabling multicast on the OVSIntPorts. My cluster network uses 2 Intel 10G ports bonded together, 1 bridge, and 2 IntPorts. On the switch side I added a trunk and enabled jumbo frames. After setting the MTU to 8996 (according to the wiki), the Ceph cluster stops working, while...
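    For reference, jumbo frames on an OVS setup like the one described are usually applied bottom-up, from the physical ports to the internal ports. A minimal sketch with hypothetical interface names (the post does not name its ports):

    ```shell
    # Hedged sketch: raise the MTU on the physical NICs, the bond, the
    # bridge, and finally the OVS internal ports. Names are placeholders.
    ip link set eth2 mtu 8996
    ip link set eth3 mtu 8996
    ip link set bond0 mtu 8996
    ip link set vmbr1 mtu 8996
    # Newer Open vSwitch releases honor mtu_request on internal ports:
    ovs-vsctl set interface cluster0 mtu_request=8996
    ```

    Every hop on the path (NICs, bond, bridge, internal ports, and the switch trunk) has to agree on the larger MTU, otherwise oversized frames are silently dropped.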
  11. ceph OSD adding huge problem

    Ok, I see. Thanks for this info. Good to know that recent firmware is not available any more.
  12. ceph OSD adding huge problem

    Active backup is sufficient if you run on 40G. Guess I'll try IB in a home lab. Prices for the hardware seem to be worth it. Thanks for the info. Are there still firmware images out there, or are you stuck on the firmware the devices come with?
  13. ceph OSD adding huge problem

    Perfect, thanks a lot. Do the 4036 basically support bonding 2 links to achieve redundancy if you have 2 switches?
  14. ceph OSD adding huge problem

    Ok, I see. What IB switch (model) do you use?
  15. ceph OSD adding huge problem

    May I ask exactly what InfiniBand hardware you have in use, and whether you have a redundant network (bonding/stacking)? I am thinking about using IB as well.
  16. How can I compile kernel modules in pve 4.2.2.1 enterprise?

    I have built a driver for the ioDrive2 for the current Proxmox kernel. If someone needs it: https://forum.proxmox.com/threads/fusion-iodrive2-support.24113/#post-166611
  17. Hardware/Concept for Ceph Cluster

    No, not exactly. Test 1: 4 nodes (4x3 OSD), journal on disk. Same result with 3 nodes with journal on disk + 1 node with 1x P3600. Test 2: 4 nodes (4x3 OSD), journal on ioDrive2 (one ioDrive per node). It is not surprising that there is no difference when using an external journal on only one node. The...
  18. Hardware/Concept for Ceph Cluster

    I borrowed some ioDrives (ioDrive2 1.2T) from a colleague and had the chance to test them in my setup. I moved a 10G journal from the MX300 to the ioDrives and did the basic rados benchmark. The rest of the setup is unchanged (still only 1x 10G). rados -p ceph-ssd bench 10 write --no-cleanup...
  19. Fusion IODrive2 support

    I've talked to SanDisk tech support and asked for a driver. They told me that they will not support Proxmox and do not know when/if they will support 4.4 kernels. So I spent some time and used a mix of Ubuntu and Debian sources to compile the driver for the current Proxmox kernel...
  20. Hardware/Concept for Ceph Cluster

    Thank you for sharing your experience. You're right, I will have to do more realistic benchmarking. BTW: I have been using the benchmark proposals of Sebastien Han (the link you provided). I thought about leaving the journal on the MX300 before. I did the benchmarks with and without the NVMe drives...
