Search results

  1. ext4 trim and Ceph

    Hello, so I got somehow misled by this - http://ceph.com/docs/master/rbd/qemu-rbd/ Adding discard=on to the IDE drive definition fixed it! Thanks! And about the last one - discard for the OSD? I don't know why it would be important in any way. I use xfs on a spindle drive, so discard seems... (see the config sketch after this list)
  2. ext4 trim and Ceph

    Hello, I have set up a three-node Ceph cluster to store VMs. It works very well, although I don't know why freeing space does not work. For example, I create a Linux VM with a 500G disk. It only takes a small fraction of that space after installation. Then I create two big files, let's say 20G. With "ceph -w"...
  3. [SOLVED] CEPH - rbd error: rbd: couldn't connect to cluster (500)

    So one thing was wrong: how I defined the monlist (it should be spaces, not commas). I also changed the name to avoid some weird issues, but it still does not help: dir: local path /var/lib/vz content images,iso,vztmpl,rootdir maxfiles 0 rbd: shared monhost 10.10.10.1 10.10.10.2 10.10.10.3 pool rbd... (this storage.cfg is reconstructed after this list)
  4. [SOLVED] CEPH - rbd error: rbd: couldn't connect to cluster (500)

    Hello, I did the CEPH installation according to this - http://pve.proxmox.com/wiki/Ceph_Server Everything went fine. Done all steps, CEPH seems to work: root@proxmox-A:/etc/pve/priv/ceph# pveceph lspools Name size pg_num used data 2...
  5. Node is showing as red

    Thanks, but I am passing it through a switch that just transparently passes multicasts along (so treats them as broadcasts), and nothing really changed. I was digging and I found that (I have a shared LVM VG over iSCSI): --- Logical volume --- LV Path /dev/shared/vm-100-disk-1...
  6. Node is showing as red

    Hello, I have a strange issue. When I log in to node A via HTTP, B shows in red (but I can browse its summary, storage and so on). If I log in to B via HTTP, A shows as red. Output from A: root@proxmox-A:~# pvecm status Version: 6.2.0 Config Version: 12 Cluster Name: KLASTER Cluster Id: 9492 Cluster...
  7. Two node cluster - some clarification needed

    Hello, I would like to clarify some things I think I know about a two-node HA cluster. 1) If I understand correctly, I can use a quorum disk as a replacement for a third node, but that disk will still be a single point of failure? I mean, if the qdisk fails, then the whole cluster is doomed? 2) The qdisk could...
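
A minimal sketch of the discard fix mentioned in result 1, assuming a Proxmox VE host with an RBD-backed disk; the VM ID (100) and storage name (rbd_storage) are placeholders, not values from the thread. The discard option goes on the drive definition, either in the VM config file or via qm set; the guest filesystem still has to issue TRIM (for example with fstrim or the ext4 discard mount option) before freed space can be released back to Ceph.

    # /etc/pve/qemu-server/100.conf - drive line with discard enabled (placeholder values)
    ide0: rbd_storage:vm-100-disk-1,size=500G,discard=on

    # equivalent CLI call on the host
    qm set 100 --ide0 rbd_storage:vm-100-disk-1,discard=on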
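
The storage definition quoted in result 3 is flattened into one line by the search excerpt; laid out as it would appear in /etc/pve/storage.cfg it reads roughly as below. Only what the excerpt shows is included, and the rbd entry is cut off after the pool line.

    dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

    rbd: shared
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        ...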
