Search results

  1. Ceph network recommendations

    The recommendation for CEPH is to have the CEPH backend traffic on a separate network; corosync traffic should ideally also be on a separate network. LACP bonding over 4 ports would probably not have the expected effect, as the bandwidth between two dedicated nodes would not exceed 10G. So I would...
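
    A minimal sketch of what that separation could look like in /etc/pve/ceph.conf (the subnets below are assumptions; adjust them to your own addressing):

        [global]
            # client/VM traffic
            public_network  = 10.10.10.0/24
            # OSD replication and heartbeat traffic on its own NICs/switches
            cluster_network = 10.10.20.0/24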
  2. RAID 1 and CEPH

    One more note: For Ceph to work correctly you need a cluster of at least "3" nodes!
  3. New to pve, trying to delete ceph pool

    For CEPH you need at least 3 nodes! CEPH on a single node is a bad idea, even for tests.
  4. Migrating VM via private IP instead of public IP

    Look into Datacenter -> Options -> Migration Settings,
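
    For reference, a sketch of the resulting entry in /etc/pve/datacenter.cfg (the CIDR is an assumed private migration network):

        # send migration traffic over the private network
        migration: secure,network=10.10.10.0/24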
  5. [SOLVED] Problem after upgrade to ceph octopus

    Are these OSDs which you created before Nautilus with ceph-disk? Did you follow this step in the upgrade to Nautilus: ceph-volume simple scan /dev/... and then activate it?
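
    A sketch of that upgrade step, assuming a legacy ceph-disk OSD data partition such as /dev/sdX1 (placeholder device name):

        # record the metadata of the old ceph-disk OSD
        ceph-volume simple scan /dev/sdX1
        # activate all scanned OSDs so they are started via ceph-volume from now on
        ceph-volume simple activate --all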
  6. ZFS Cluster Sharing Issue

    Please note that you will need at least 3 nodes for CEPH!
  7. OVS Bridge for three nodes

    So you have built a full mesh? Server 1 port 1 to Server 2 port 2, Server 2 port 1 to Server 3 port 2, Server 3 port 1 back to Server 1 port 2, or similar. If the bridge sent out on both ports you would have a loop, blowing up traffic completely. What the OVS Bridge does (and some switches can do...
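
    As a sketch, loop protection on such a mesh is typically handled by enabling RSTP on the OVS bridge; the bridge name vmbr1 is an assumption:

        # let RSTP block one of the redundant mesh links
        ovs-vsctl set Bridge vmbr1 rstp_enable=true
        # check which port ended up in the blocking role
        ovs-appctl rstp/show vmbr1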
  8. Problem with ceph cluster without quorum

    And by the way, size=2 is a bad idea! Use at least size=3 and min_size=2!
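
    A sketch of adjusting an existing pool accordingly (the pool name is a placeholder):

        ceph osd pool set mypool size 3
        ceph osd pool set mypool min_size 2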
  9. Backup Ideas

    Did you have a look at Proxmox Backup Server? https://www.proxmox.com/de/proxmox-backup-server
  10. [SOLVED] Ceph - slow Recovery/ Rebalance on fast sas ssd

    It is intentional that CEPH does not fill up all available bandwidth during recovery/rebalancing. If you want to speed it up, you can set these values for a quicker recovery of your cluster, helping OSDs to perform recovery faster. osd max backfills: This is the maximum number of...
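
    A sketch of raising those limits at runtime (example values; tune them back down once recovery is done):

        # allow more parallel backfill/recovery work per OSD
        ceph config set osd osd_max_backfills 4
        ceph config set osd osd_recovery_max_active 8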
  11. RAID is not detected

    But dissolve the RAID in the BIOS first! Otherwise it will blow up in unexpected places. That is probably an Intel S-ATA controller with the LSI fake RAID code? I can only say: keep your fingers off it, pass the disks through individually, and then run ZFS on top of them.
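
    A minimal sketch of passing the disks through individually and mirroring them with ZFS (pool name and device paths are placeholders; prefer /dev/disk/by-id paths):

        # one mirrored pool on the two individual disks instead of fake RAID
        zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2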
  12. [SOLVED] Ceph offline, interface says 500 timeout

    Did you check the free space on /var? The log stopping in the middle of a line could be a sign of a full /var.
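
    A quick sketch of checking that (the paths are just the usual suspects, not an exhaustive list):

        df -h /var
        du -xsh /var/log /var/lib/ceph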
  13. HP DL360 G9 with internal SD

    You can usually put a S-ATA SSD in a SAS shelf as long as you do not need the second channel (you do not have a second head). Of course you could also use SAS SSDs. Again -> keep your fingers off SD cards in a production server; they are good for camera equipment but not for servers. We see also...
  14. HP DL360 G9 with internal SD

    I would strongly advise against using SD cards for system disks. Reason: SD cards do not like heavy writing; they will die very early. They are only usable for setups with rare writes. At the very least you need to put the logfiles on another medium. So why not use an adequate S-ATA SSD? They do not cost...
  15. Localstorage to ceph migration

    Yep, best is to even have two corosync rings.
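
    A sketch of creating the cluster with two corosync links on separate networks (the addresses are placeholders):

        pvecm create mycluster --link0 10.10.10.1 --link1 10.20.20.1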
  16. Localstorage to ceph migration

    1 -> Build the corosync cluster network
    2 -> Install Ceph on all nodes!
    3 -> Create MONs (I would recommend them on all 3 nodes)
    4 -> Create OSDs on nodes 2 + 3 (you get degraded PGs as long as node 1 has no OSDs, but that's OK for the migration time)
    5 -> Create a pool for images
    6 -> Migrate VM...
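
    A rough command sketch for steps 2-5 using pveceph (pool and device names are placeholders):

        pveceph install                  # on every node: install the Ceph packages
        pveceph mon create               # on each of the 3 nodes: create a monitor
        pveceph osd create /dev/sdX      # on nodes 2 and 3: create OSDs on empty disks
        pveceph pool create vm-images    # create the pool, then add it as RBD storage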
  17. Recover zpool ( insufficient replicas / corrupted data )

    The missing indentation for sdd and sdc in the zpool status output is very telling!
  18. Recover zpool ( insufficient replicas / corrupted data )

    This sounds like sdd and sdc were not added as a mirror device but as single devices - really a bad setup. So there is no chance; the data is dead meat, as it is spread over all vdevs.
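
    For illustration, a sketch of the difference (pool and device names are placeholders):

        # what apparently happened: two single top-level vdevs, data striped across them, no redundancy
        zpool add tank sdc sdd
        # what a redundant extension would have looked like: one mirrored vdev
        zpool add tank mirror sdc sdd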
  19. Cannot create CEPH Pool with min_size=1 or min_size > 2

    Every piece of CEPH documentation tells you clearly why min_size 1 is a very bad idea; keep your fingers off it. min_size 2 and size 3 is fine!
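
    A sketch of creating a pool with those values via pveceph (the pool name is a placeholder):

        pveceph pool create mypool --size 3 --min_size 2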
