Search results

  1. ceph pool and max available size

    OK, but this brings up a confusing question - I increased pg_num and pgp_num to 1024 on the hdd_new ceph pool, data: pools: 1 pools, 1024 pgs objects: 200k objects, 802 GB but then when I created a new additional backup pool, mon_command failed - pg_num 1024 size 3 would mean...
  2. ceph pool and max available size

    Hello Alwin, Forgive me, but isn't it calculated as 256*4 = 1024 PGs?
  3. ceph pool and max available size

    Hello, I am continuing to work with my ceph servers under proxmox, and I have a big concern regarding the max available value: root@ceph1:~# ceph -s cluster: id: health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph3,ceph4,hv4 mgr: ceph1(active)...
  4. Created an erasure code pool in ceph, but cannot work with it in proxmox

    Thanks for the update and for letting me know that proxmox does not support it.
  5. Created an erasure code pool in ceph, but cannot work with it in proxmox

    Hello Udo, Thanks for the information; while I was aware of this tier information when creating the erasure pool, I did not know that it is a requirement. Now I have created the tier pool, but I am not sure whether this result is correct. Let's say I moved a 50G image to this pool: But it looks like the 50G stays...
  6. Created an erasure code pool in ceph, but cannot work with it in proxmox

    Hello, I created an erasure code pool in ceph, but cannot work with it in proxmox. I simply used RBD(PVE) to mount it. The pool shows up under proxmox correctly, with its size as well, but I cannot move a disk there: create full clone of drive virtio0 (hdd:vm-100-disk-1) error adding image to...
  7. Directly mount the ceph pool and backup the whole VM there?

    Hello, I am continuing to work with my ceph cluster under proxmox. I have a question about the backup function in the proxmox UI. I found that this version works with "local" and "nfs (through a VM I created)", but not with RBD(*). Is it possible to back up the VM directly to the RBD ceph...
  8. Is there any difference between RBD(PVE) and RBD external under proxmox?

    Hello spirit, Thank you very much for the explanation. So I will stick with rbd pve then.
  9. Is there any difference between RBD(PVE) and RBD external under proxmox?

    Hello, In my ceph cluster's /etc/pve/storage.cfg, I found the following: This works (shown as: RBD external) rbd: ceph-hdd content images,rootdir krbd 1 monhost 10.10.10.1;10.10.10.2;10.10.10.3;10.10.10.4 pool hdd username admin This also works (shown as: RBD(PVE))...
  10. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    Dear all, Thanks, it looks like the number of ceph monitors is the key to this problem. After I took one HV and installed a monitor on it, it now runs as intended: root@ceph1:~# ceph status cluster: id: health: HEALTH_WARN 1 datacenter (16 osds) down 16...
  11. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    Yes, I shut down two ceph monitor servers. The VMs froze and ceph commands timed out. There are 4 monitors on the 4 ceph servers. The pvecm status result I posted here was from after this.
  12. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    I indeed have 7 nodes in the cluster, some of which act as hypervisors. But I only have 4 monitors; as I mentioned, I have 4 nodes for ceph only. Even when I shut down two ceph servers, the quorum still shows OK: root@ceph1:~# ceph status 2018-06-29 20:03:56.490455 7fc872847700 0...
  13. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    Hello, Thanks for this information, I didn't even know about this. Do you know of any technical documents covering this?
  14. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    Of course, if we turn everything back on (for example, just re-enable the LAN ports of the ceph servers), it is totally fine again... root@ceph1:~# ceph status cluster: id: health: HEALTH_OK services: mon: 4 daemons, quorum ceph1,ceph2,ceph3,ceph4 mgr: ceph4(active)...
  15. How can you maintain the cluster so it continues to work after some nodes/datacenters go down

    Hello, We are trying to simulate a disaster in our proxmox environment to see what we can do. We have 4 nodes that only run ceph, plus some additional nodes that only act as HVs. We declared 2 datacenters: 2 ceph servers belong to datacenter A, and the other 2 ceph servers belong to datacenter B...
  16. can't add vlan tag x to interface bond0?

    Thanks, is there any difference between the network setup of a cluster and a standalone node? Because the last time I set up a cluster, declaring the VLAN on the HV node seemed to be OK, but this time it was not. Anyway, it is OK now, thank you aderumier!
  17. can't add vlan tag x to interface bond0?

    Hello, I faced this problem when starting a VM: RTNETLINK answers: File exists can't add vlan tag 2002 to interface bond0 kvm: -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown: network script /var/lib/qemu-server/pve-bridge...
  18. Host key verification failed and makes migration fail

    Hello, How can we fix this error in a cluster? It repeats in a loop: 2018-03-26 22:48:24 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=HV1' root@10.20.1.15 /bin/true 2018-03-26 22:48:24 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2018-03-26...
  19. dell H700 raid card and osd does not appear in the UI as bluestore

    Just solved this problem by clearing the metadata: dd if=/dev/zero of=/dev/sdb bs=1024 count=1M pveceph createosd /dev/sdb It works normally now. This problem is solved.
  20. dell H700 raid card and osd does not appear in the UI as bluestore

    And if we now add back the invisible osd.0 with: root@ceph1:~# ceph osd crush add osd.0 0 host=ceph1 add item id 0 name 'osd.0' weight 0 at location {host=ceph1} to crush map One thing is for sure... it seems it cannot detect the OSD as bluestore... How can we solve this? Thanks
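The PG arithmetic discussed in results 1-3 can be sketched as follows. The formula (total PG replicas = pg_num × pool size, divided across the OSDs, with a commonly cited target on the order of 100 PGs per OSD) is standard Ceph sizing guidance; the pg_num and size values come from the snippets above, while the OSD count here is a hypothetical example, not taken from the thread.

```shell
#!/bin/sh
# Total PG replicas a replicated pool creates: pg_num * size (replica count).
pg_num=1024
size=3
total=$(( pg_num * size ))
echo "pool with pg_num=$pg_num size=$size -> $total PG replicas"

# Rough per-OSD load, assuming a hypothetical cluster of 32 OSDs.
# Ceph guidance targets roughly 100 PGs per OSD; mon_command refuses
# pool creation when a new pool would push this ratio past the limit.
osds=32
echo "PGs per OSD: $(( total / osds ))"
```

This is also why result 2's "256*4 = 1024" matters: the mon checks the cluster-wide total across all pools, not each pool in isolation.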
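The monitor behaviour in results 10-15 follows from Ceph's majority-quorum rule (a general property of the monitor cluster, not spelled out in the snippets): with n monitors, floor(n/2)+1 must be reachable, so an even monitor count tolerates no more failures than the next-lower odd count. A minimal sketch of that arithmetic:

```shell
#!/bin/sh
# Majority quorum for n Ceph monitors: floor(n/2) + 1 must be up.
# Note that 4 monitors tolerate only 1 failure, same as 3 - which is
# why shutting down 2 of the 4 monitors froze the cluster above.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n monitors -> quorum needs $quorum, tolerates $(( n - quorum )) down"
done
```

This matches the fix in result 10: adding a fifth monitor on an HV lets the cluster survive a whole 2-monitor datacenter going down.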
