Search results

  1. PROXMOX and Windows ROK license for DELL

    Hi Wolfgang, thank you for your answer. Maybe you have never dealt with ROK licenses: they work on bare metal but not when virtualized. To make it work, VMware has the SMBIOS.reflectHost flag in the .vmx file, and Hyper-V has a solution too. Proxmox also has a solution, which is to...
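
    The solution is cut off above, but it is presumably the VM's smbios1 option: a minimal sketch of mirroring the host's DMI strings into the guest's SMBIOS type-1 table. The VMID and the Dell strings below are placeholders; read the real values from dmidecode on the host first.

        # Read the host's DMI strings (example output shown)
        dmidecode -s system-manufacturer    # e.g. "Dell Inc."
        dmidecode -s system-product-name    # e.g. "PowerEdge R740"
        # Copy them into the VM's SMBIOS type-1 table (VMID 100 assumed).
        # Recent Proxmox releases want base64-encoded values for strings
        # that contain spaces:
        qm set 100 -smbios1 'base64=1,manufacturer=RGVsbCBJbmMu,product=UG93ZXJFZGdlIFI3NDA='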
  2. PROXMOX and Windows ROK license for DELL

    I have a new DELL server and installed PROXMOX without a problem. I'm now installing W2016 ROK, but it hangs at the ROK license check, that is, the check that it's real DELL hardware. I already dealt with this problem on HP hardware and resolved it using SMBIOS parameters. With Dell I'm not able to...
  3. ram usage with bluestore

    This is my test cluster: node A: 3 filestore 1TB OSDs; node B: 2 filestore 1TB OSDs and 1 bluestore 1TB OSD; node C: 6 bluestore 300GB OSDs. I noticed that the bluestore OSDs take 3.5GB of RAM each, while the filestore ones take 0.7GB each. Following this thread, I added this to ceph.conf...
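
    The ceph.conf addition is cut off above; the knob that discussion usually revolves around is the bluestore cache size. A hedged sketch, with an illustrative value rather than the poster's actual setting:

        [osd]
        # cap the bluestore cache per OSD; Luminous-era defaults are about
        # 1GiB for HDD-backed and 3GiB for SSD-backed OSDs
        bluestore_cache_size = 1073741824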
  4. New 3-nodes cluster suggestion

    Thank you for the heads-up on this problem. I didn't find any documentation on it; if you have a link, it's appreciated. Writing that a three-node deployment is "irresponsible", anyway, seems a bit strong to me: the normal situation is three nodes, and thus the pg...
  5. New 3-nodes cluster suggestion

    alexskysilk, that's not what I've read and experienced. My lab consisted of two Ceph nodes with one copy each; the pool was 2/1. When I shut down one node, VMs were migrated (if in HA) and everything kept working. Recovery was then a pain in the ass (old hardware, only two 1Gb LANs, three 7200rpm...
  6. New 3-nodes cluster suggestion

    I did not select the SATADOM for the OS because I read this: https://www.supermicro.com/datasheet/datasheet_SuperDOM.pdf (see "Use Cases not recommended"). I don't use a switch for the Ceph and corosync traffic on 10GB; I have it meshed, every connection with two bonded cables for higher reliability. That...
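
    For reference, one leg of such a switchless mesh might look like the /etc/network/interfaces fragment below; the interface names, addressing, and bond mode are assumptions, and each node carries one two-cable bond per peer.

        auto bond1                          # direct link to node B
        iface bond1 inet static
            address 10.15.15.1/30
            bond-slaves enp1s0f0 enp1s0f1
            bond-miimon 100
            bond-mode broadcast             # duplicate frames on both cables
        # bond2, the link to node C, is configured the same way on its own subnet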
  7. New 3-nodes cluster suggestion

    Hi, I still haven't bought it; I just got the OK from the customer and I think we'll have it in a couple of weeks. I haven't read about the need to ask for the LSI to be flashed in IT mode/passthrough, as it's not a real RAID card. Have you got any link for that? Please let's both update this post, I...
  8. New 3-nodes cluster suggestion

    The OSDs are 460GB each, for a total of less than 3TB. The usual suggestion is 1GB of RAM per 1TB of data (http://docs.ceph.com/docs/jewel/start/hardware-recommendations/). Anyway, I'll evaluate increasing the RAM to 96GB, thanks!
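
    As a quick check of that rule of thumb (assuming six 460GB OSDs in total, a count the excerpt doesn't state):

        6 × 460GB ≈ 2.76TB  →  roughly 3GB of RAM earmarked for the OSD daemons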
  9. New 3-nodes cluster suggestion

    It will be 6 VMs to start: 4 Windows and 2 Linux, with no more than 40GB used by the VMs. Your recommendation to increase the RAM is valid, but that's easily upgradeable. My main concern is the SATA SSDs, in case anyone has used these for Ceph.
  10. New 3-nodes cluster suggestion

    I'm about to build a new, small, general-purpose cluster. The selected hardware is a SuperMicro TwinPro (2029TP-HC0R) with 3 nodes, each with: 1× XEON SCALABLE CPU (P4X-SKL3106-SR3GL); 64GB DDR4-2666 RAM (MEM-DR432L-CL01-ER26); a 4-port 10GB NIC (AOC-MTG-I4TM-O SIOM) for Ceph traffic (mesh); 4...
  11. Proxmox VE - Support Lifecycle

    It does, if you want a supported system. And most customers want it and pay for it.
  12. network problem on win vm

    Sorry for my late reply. It was a bonding issue: I had set it up in RR mode. Once it was changed to ALB mode, everything worked perfectly.
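
    A sketch of the change, assuming an ifupdown-style bond with placeholder NIC names: balance-rr stripes packets across the slaves and can deliver them out of order, which TCP in guests handles poorly, while balance-alb keeps each flow on a single NIC.

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode balance-alb    # was balance-rr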
  13. network problem on win vm

    I will try that. I read that VirtIO network drivers are the best for Linux VMs, but not for Windows.
  14. network problem on win vm

    I have a cluster on Proxmox 4.4: three nodes, two IBM x3400 and a small PC. The two IBMs host the Ceph data; the third is only a monitor. I added another server, an HP DL380 G7, installed Proxmox 5.2 on it and joined it to the cluster (still not Ceph). I will upgrade the other servers later. I have a...
  15. Ceph, RAID cards and Hot swap

    Hi all, I have a doubt about RAID controllers and Ceph. I know that I must not put Ceph OSD disks under RAID, and as such I would not need a RAID controller. But the controller is what allows me to hot-swap disks, so I DO need it. Is that right?
  16. new ceph pool

    OK, understood. I added the keys, and now I can use the new pool. I thought the keys were per node, not per storage definition. Thank you!
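
    A sketch of what adding the keys looks like, assuming a storage ID of "newpool" (a placeholder): Proxmox expects one keyring per storage definition, named after the storage ID.

        # "newpool" is the storage ID from storage.cfg; paths per Proxmox convention
        mkdir -p /etc/pve/priv/ceph
        cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/newpool.keyring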
  17. new ceph pool

    OK, I thought the replicas would be made with the available OSDs, not the available nodes. I'll take a look at whether that's possible by working on the ruleset. Anyway, as a test, I created another pool with a 2/1 policy, and it doesn't give the "degraded" error. But it shows no available space: I...
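
    For context, a "2/1 policy" pool in CLI terms, with a placeholder name and PG count:

        ceph osd pool create testpool 64 64 replicated
        ceph osd pool set testpool size 2       # two replicas
        ceph osd pool set testpool min_size 1   # keep serving I/O with one copy left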
  18. new ceph pool

    ceph osd tree

    ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 4.74995 root default
    -3 2.69997     host bambino
     1 0.90997         osd.1         up  1.00000          1.00000
     2 0.89000         osd.2         up  1.00000          1.00000
     5 0.89999         osd.5         up  1.00000...
  19. new ceph pool

    I have only one ruleset:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

    When I create the pool, I select ruleset 0. Immediately, I get the HEALTH_WARN and this log: 64 active+undersized+degraded
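
    "Working on the ruleset", as suggested upthread, would mean choosing leaves at the OSD level instead of the host level. A contrasting sketch (not the poster's rule): with fewer hosts than replicas, a host-level rule leaves PGs undersized, while an OSD-level rule lets replicas share a host.

        rule replicated_osd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type osd   # replicas may land on the same host
            step emit
        }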