Search results

  1.

    After last update today I get "too many PGs per OSD (256 > max 200)"

    After moving the VMs to a different storage, deleting the pool and creating a new one with pg_num=64, everything is running fine and the HEALTH is OK ;-)
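
    A minimal CLI sketch of that pool swap, assuming a hypothetical pool name "vm-pool", Ceph Luminous syntax, and that pool deletion is enabled (mon_allow_pool_delete) — the data must already have been moved off the old pool:

        ceph osd pool delete vm-pool vm-pool --yes-i-really-really-mean-it   # drop the oversized pool
        ceph osd pool create vm-pool 64 64                                   # recreate it with pg_num = pgp_num = 64
        ceph osd pool application enable vm-pool rbd                         # Luminous expects an application tag for RBD pools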
  2.

    After last update today I get "too many PGs per OSD (256 > max 200)"

    Hello, by the way, I'm not able to create a new pool with correct values. The message I get is: mon_command failed - pg_num 64 size 3 would mean 960 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3). Am I doing something wrong here?
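
    The numbers in that error can be reproduced by hand; a small shell check, assuming the existing pool from this thread (pg_num 256, size 3) plus the new pool (pg_num 64, size 3) on 3 OSDs:

        echo $(( 256 * 3 + 64 * 3 ))   # existing + new PG replicas: 768 + 192 = 960 total PGs
        echo $(( 200 * 3 ))            # mon_max_pg_per_osd * num_in_osds = 600, so the creation is refused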
  3.

    After last update today I get "too many PGs per OSD (256 > max 200)"

    Should I create a new pool with the correct limit? Do I have to move the VMs to a newly created pool, or can I correct it in the running pool?
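
    For what it's worth, Luminous can only increase pg_num on a running pool, not decrease it, so the usual route is a new pool plus a disk move. A rough sketch, assuming a hypothetical VM 100 with disk scsi0 and a storage entry "ceph-new" pointing at the new pool:

        qm config 100 | grep scsi0                 # confirm which storage the disk currently lives on
        qm move_disk 100 scsi0 ceph-new --delete   # copy the disk to the new pool's storage and drop the old copy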
  4.

    After last update today I get "too many PGs per OSD (256 > max 200)"

    Hello all, after the latest update, done this morning, I get a HEALTH_WARN in Ceph -> "too many PGs per OSD (256 > max 200)". Before the update the Ceph health was fine. Were there any changes on the Ceph side, or have I done something wrong in my setup?
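
    For reference, the warning value can be reconstructed: each OSD carries roughly pg_num * size / num_osds placement groups, and Luminous newly warns above mon_max_pg_per_osd (200 by default). Assuming a single pool with pg_num 256 and size 3 on 3 OSDs:

        echo $(( 256 * 3 / 3 ))   # = 256 PGs per OSD, which is what triggers "256 > max 200"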
  5.

    No OSDs configurable on two nodes of a Ceph Cluster

    Hello again, I have now reinstalled the two nodes that had been upgraded from 5.0, this time from the 5.1 ISO, and redone the whole cluster and Ceph setup. Now everything runs well. It seems that Ceph won't run across a mixed setup of nodes upgraded to 5.1 and nodes freshly installed with 5.1. The...
  6.

    No OSDs configurable on two nodes of a Ceph Cluster

    Here are the syslog files of all three hosts:
  7.

    No OSDs configurable on two nodes of a Ceph Cluster

    # "pveversion -v" of host 1 (osd is coming up) proxmox-ve: 5.1-28 (running kernel: 4.13.8-2-pve) pve-manager: 5.1-36 (running version: 5.1-36/131401db) pve-kernel-4.13.4-1-pve: 4.13.4-26 pve-kernel-4.13.8-2-pve: 4.13.8-28 libpve-http-server-perl: 2.0-6 lvm2: 2.02.168-pve6 corosync: 2.4.2-pve3...
  8.

    No OSDs configurable on two nodes of a Ceph Cluster

    "pveversion -v" of host 1 (osd is coming up): proxmox-ve: 5.1-28 (running kernel: 4.13.8-2-pve) pve-manager: 5.1-36 (running version: 5.1-36/131401db) pve-kernel-4.13.4-1-pve: 4.13.4-26...
  9.

    No OSDs configurable on two nodes of a Ceph Cluster

    Hello all, I've installed a new 3-host cluster to create a Ceph-based HA cluster. Two of them I installed from the 5.0 installation source (ISO) and one from the 5.1 ISO. The two first-installed hosts have been upgraded to 5.1 via the subscription repository. For your information: I have installed Ceph on all...
  10.

    All cluster-nodes are hanging after nightly backup

    It looks like this problem is the result of too little memory with the newer 4.4 version of PVE. After upgrading the memory on all nodes, the cluster is running normally. I'm now watching whether the problem will appear again in the future.
  11.

    All cluster-nodes are hanging after nightly backup

    I saw this message on one of the hosts console:
  12.

    All cluster-nodes are hanging after nightly backup

    Hello all, last night our cluster of 7 nodes was hanging in the Proxmox admin interface (on port 8006). While the Proxmox cluster management did not work, the VMs on it were all running fine. I had to restart the whole node via the SSH console to get it working again. We have the...
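
    Before rebooting a node in that state, it may be worth restarting only the management stack over SSH — a sketch, on the assumption that the VMs themselves are still healthy:

        systemctl restart pvestatd pvedaemon pveproxy   # restart the GUI/API services only, VMs are untouched
        systemctl status pve-cluster corosync           # check whether pmxcfs or corosync is what is actually hanging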
  13.

    Integration of Mail Gateway with qmail/vmailmgr server

    The problem is that the qmail server does not reject delivery when the requested user doesn't exist. Is there a possibility to store a list of valid users on the PMX-MGW?
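
    PMG sits on top of Postfix, so one possible direction (not confirmed in this thread; file paths are hypothetical) is the standard Postfix recipient map, which rejects anything not in a static list of valid addresses:

        # main.cf: only accept recipients listed in the map
        #   relay_recipient_maps = hash:/etc/postfix/relay_recipients
        # /etc/postfix/relay_recipients: one valid address per line, e.g.
        #   user1@example.com   OK
        postmap /etc/postfix/relay_recipients   # compile the lookup table after editing it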
  14.

    Integration of Mail Gateway with qmail/vmailmgr server

    Hello all, does the Mail Gateway offer a usable technique to check for valid users/emails on a Qmail/Vmailmgr mail server? In our network there are several Qmail/Vmailmgr mail servers up and running for a large number of users/domains, so we cannot switch to another server type. Does someone have any...
  15.

    Proxmox 4.4 and DRBD: which version is recommended?

    Hello all, I'm still confused about the possible setup. Isn't it possible to make a 2-node setup with really only 2 members? I only want filesystem mirroring such as RAID-1 (across 2 nodes), and an HA-cluster setup with only one failover host as backup.
  16.

    Proxmox 4.4 and DRBD: which version is recommended?

    Hello all, we use a two-node cluster and want to set up DRBD to mirror the VM images to both nodes. I'm a little bit confused about which version (8 or 9) I should use and how I can install it. Does anybody have experience with such a setup and can make a recommendation? Regards, Hans-Peter
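
    As an illustration of the mirroring being asked about, a minimal two-node DRBD resource definition (8.x/9.x style syntax; hostnames, backing disk and IPs are hypothetical placeholders):

        resource r0 {
            net { protocol C; }              # fully synchronous replication between the two nodes
            on pve-node1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;         # local backing device that gets mirrored
                address   10.0.0.1:7789;
                meta-disk internal;
            }
            on pve-node2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7789;
                meta-disk internal;
            }
        }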
  17.

    Setup with Intel 10 Gigabit X710-DA2 SFP+ Dual Port

    After all other attempts failed, my last try to get the NICs working is to compile the newest driver from Intel (1.6.24). When I start make in the src directory I get the following error: ---------------------------- root@pmx-mm-01:/usr/src/i40e-1.6.42/src# make make[1]: Entering...
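
    The usual recipe for such an out-of-tree Intel driver build on a PVE node, assuming the i40e source tree already sits under /usr/src as shown — a missing headers package is the most common cause of make failing there:

        apt-get install pve-headers-$(uname -r)    # headers matching the running PVE kernel are needed to build modules
        cd /usr/src/i40e-1.6.42/src
        make clean && make                         # build the i40e module against those headers
        make install                               # install it under /lib/modules (typically runs depmod)
        modprobe -r i40e && modprobe i40e          # reload the driver (briefly drops the links)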
  18.

    Setup with Intel 10 Gigabit X710-DA2 SFP+ Dual Port

    It looks like these cards don't work with the current Proxmox version and the Supermicro mainboards we use. Does anybody know a functional dual-port 10G SFP+ NIC that works well with Proxmox 4.4-xx?
  19.

    Setup with Intel 10 Gigabit X710-DA2 SFP+ Dual Port

    Have you done any special setup for the card, and what type of switch do you have?
  20.

    Setup with Intel 10 Gigabit X710-DA2 SFP+ Dual Port

    The link is shown with "state DOWN" in ip link. Why? Here is the output of "ip link": 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc...
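
    Two quick checks for that DOWN state, assuming eth4 is really one of the X710 ports and has simply not been brought up by the network configuration:

        ip link set eth4 up                        # after this, "ip link" shows either UP or NO-CARRIER (admin vs. physical problem)
        ethtool eth4 | grep -i 'link detected'     # "no" here points at the SFP+ module, cabling or switch side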