Search results

  1. firewall packet drop at nf_conntrack

    I have about 18 cluster servers that all share the same firewall rules. All are fine except one node: as soon as the firewall is enabled, all connections drop, and the logs show Oct 4 05:31:29 xx kernel: [250319.678513] nf_conntrack: nf_conntrack: table full, dropping packet Oct 4 05:31:29 xx kernel: [250319.678799]...
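    The "table full, dropping packet" message means the connection-tracking table has hit its limit on that node. A common remedy is to check the current usage against the limit and raise it; the values below are illustrative, not tuned recommendations:

```shell
# Compare current tracked connections against the configured ceiling.
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# Raise the ceiling at runtime (example value; size to available RAM).
sysctl -w net.netfilter.nf_conntrack_max=1048576

# Persist the setting across reboots.
echo 'net.netfilter.nf_conntrack_max = 1048576' >> /etc/sysctl.conf
```

    It is also worth asking why only this node fills its table — a traffic loop or a flood hitting that node can exhaust conntrack even with a generous limit.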
  2. Cloud-init bug

    Hello, it seems there is a bug with the Cloud-init root password. I understand that Cloud-init works with SSH key access, but normal root access should work as well. We set up a template and enabled root access, but it never works: it requires logging in with an SSH key first and then setting root access with "passwd". The...
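    Assuming the template uses Proxmox VE's Cloud-init integration, the user and password can be injected from the host with `qm set`; the VMID 100 and the password below are placeholders:

```shell
# Set the Cloud-init user and password for a hypothetical VM 100.
qm set 100 --ciuser root --cipassword 'ChangeMe123'
```

    Note that many cloud images additionally ship with password SSH logins disabled in the guest's sshd_config, which can make it look as if the injected password "never works" even when Cloud-init applied it.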
  3. Multiple Ceph pools issue

    I followed https://ceph.com/community/new-luminous-crush-device-classes/ and added the rules, which seem fine, but I am not sure why Ceph started replicating the HDD data to the NVMe as well. # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries...
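    With Luminous device classes, data lands on both classes whenever a pool is still using a rule that does not restrict the class. The approach from the linked article is to create one class-restricted rule per class and rebind each pool; the rule and pool names below are hypothetical:

```shell
# One replicated rule per device class, failure domain = host.
ceph osd crush rule create-replicated hdd-rule default host hdd
ceph osd crush rule create-replicated nvme-rule default host nvme

# Point each pool at the matching rule; data then rebalances accordingly.
ceph osd pool set hdd-pool crush_rule hdd-rule
ceph osd pool set nvme-pool crush_rule nvme-rule
```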
  4. ceph raw 2tb max?

    Hello, I have one KVM with a 5 TB Ceph raw file and Cloud-init active. The issue is that the system is on one partition, vda1, and I can only see 2 TB of it: root@xxx:~# df -H Filesystem Size Used Avail Use% Mounted on udev 511M 0 511M 0% /dev tmpfs 105M 5.1M 100M 5% /run...
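    A 2 TB ceiling on a much larger disk usually points at an MBR (msdos) partition table inside the guest, which cannot address partitions beyond roughly 2 TiB. A first diagnostic step, assuming the guest disk is /dev/vda:

```shell
# Show the partition table label; "msdos" means MBR, which caps
# partitions at ~2 TiB regardless of the underlying Ceph image size.
parted /dev/vda print
```

    If the label is msdos, options include converting the disk to GPT (risky on a live boot disk; take a backup first) or attaching the extra capacity as a second virtio disk with its own GPT label.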
  5. Ceph question

    Hello, if I use an NVMe for the Proxmox OS in a Ceph node without any RAID and that drive fails, will I lose everything on that node, even the OSDs and the journal? Or if I get it replaced, reinstall a fresh OS, and join the cluster again, will all the OSDs still be there and available? Side note: the NVMe for...
  6. is that the best speed I can get?

    Hello, I am setting up Ceph and am not sure if this is the best speed for my configuration: 3 OSDs with 5 TB enterprise hard drives and an NVMe P3700 as the Bluestore journal/DB disk. My concern is that I need a lot of space along with a lot of speed, so if I add more 5 TB drives, will the speed go up? Or should I add more journal...
  7. Mellanox bonding issue

    Hello, I have 3 Ceph servers and am trying to bond via Mellanox: 05:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3] Subsystem: Mellanox Technologies MT27500 Family [ConnectX-3] After bonding the private IPs there is no connection between the nodes. auto ib0 iface ib0...
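    One thing worth checking with ConnectX-3 ports running in InfiniBand mode (IPoIB): the Linux bonding driver only supports active-backup for IPoIB slaves, so any load-balancing bond mode will silently fail to pass traffic. A minimal /etc/network/interfaces sketch, with hypothetical addresses:

```
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves ib0 ib1
    bond-mode active-backup
    bond-miimon 100
```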
  8. P3700 vs 900P

    Hello, I am torn between the Intel P3700 and the Intel 900P for the journal. I currently have 1 x 900P and 1 x P3700, but I have not decided on the third server yet. I understand that the 900P is a consumer drive, but it has quite a good lifespan of 10 Drive Writes Per Day (DWPD) and is even faster than the P3700, which has 43.8...
  9. Flash Accelerator and SSD journal

    Hello, is it wise to use a Flash Accelerator as a journal? It gives impressive results with 4k block reads, around 200+. I am not sure how Ceph works internally, but does it use 4k blocks?
  10. ceph very slow performance

    Hello, I just built Ceph with 3 nodes and 3 x 5 TB 256 cache hard drives + SSD as journal, dual-port 10 Gb NICs, and a Juniper switch with 10 Gb ports; bond0 is 2 x 10 Gb Intel T copper cards in balance-tlb mode. When I test, Ceph is very slow and I am not sure why: root@ceph2:~# rados -p test bench 10 write --no-cleanup...
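    For results that are comparable across runs, a full rados bench pass measures writes first (keeping the objects), then sequential and random reads, then cleans up; the pool name "test" is taken from the post:

```shell
# 10-second write test; --no-cleanup keeps objects so reads can follow.
rados -p test bench 10 write --no-cleanup
# Sequential and random read tests against the objects written above.
rados -p test bench 10 seq
rados -p test bench 10 rand
# Remove the benchmark objects afterwards.
rados -p test cleanup
```

    The bonding mode may also be worth revisiting: balance-tlb often performs poorly for this kind of east-west storage traffic compared with an LACP (802.3ad) bond on a switch that supports it.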
  11. [SOLVED] Ceph OSD issue

    Hello, I set up 3 Ceph nodes with 3 x 5 TB drives and 2 SSDs as journal; however, once I created the OSDs they were not visible in the GUI and showed as down over SSH. I checked with Google but could not find any solution. root@xxx:~# ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1...
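    When freshly created OSDs show as down, the first question is whether the OSD daemons are running at all; the OSD id 0 below is a placeholder:

```shell
# Check whether the OSD service is running and why it may have exited.
systemctl status ceph-osd@0
journalctl -u ceph-osd@0 -n 50

# If it is stopped, start it and re-check the tree.
systemctl start ceph-osd@0
ceph osd tree
```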
  12. mix storage with ceph

    Hello, is it possible to mix local Proxmox storage with Proxmox Ceph in the same cluster, or will there be issues? Thanks
  13. Ceph questions?

    Hello, I just have some questions regarding Ceph. 1 - Will a 10 Gb Layer 3 switch or a 10 Gb Layer 2 switch work? 2 - I have dual 10 Gbit ports; is it possible to use both for the private Ceph cluster for better performance? Thanks
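    On question 2: Ceph can split traffic across two networks, putting client/monitor traffic on one interface and OSD replication traffic on the other, via ceph.conf; the subnets below are hypothetical:

```
[global]
    public network  = 10.10.10.0/24
    cluster network = 10.10.20.0/24
```

    Either a Layer 2 or a Layer 3 switch works for question 1, since Ceph only needs plain IP connectivity between the nodes.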
  14. ceph question

    Hello, I am planning to set up a small Ceph cluster, but I care a lot about I/O speed. My setup will be the following: 3 x U2, 72 GB RAM, SSD SM863 for the Proxmox installation, 3 or 4 x 4 TB enterprise drives (planned to grow to 5 of them later), 2 x 120 GB MyDigitalSSD BPX 80mm (2280) M.2 PCIe as journal, dual 10 Gbit Intel...
  15. New upgrade issue

    Hello, I upgraded the cluster recently, but this issue keeps happening every few hours (screenshot attached): the connection is lost, though I can still see the summary. To fix it I have to restart corosync. proxmox-ve: 5.0-25 (running kernel: 4.10.17-4-pve) pve-manager: 5.0-34 (running version...
  16. Ceph

    Hello, I am looking to use Proxmox Ceph. My servers' configuration is the following: 2U with 8 bays, 240 GB SSD, 7 x 4 TB enterprise drives, 1 x PCIe SSD as journal. The main issue is that the motherboard has 8x SATA2 and 2x SATA3 ports, so I can't put all drives on SATA3 ports. The tech guy is telling me that...
  17. Proxmox 5 reboot bug?

    Hi, it seems there is a bug, or is it just me? Any KVM that is reset from Proxmox 5, or rebooted from SSH, shuts down and does not boot back up as a normal reboot should. Thanks
  18. High Availability with NFS storage or Ceph with HA?

    Hi, I am looking to build a storage VM setup with 3 or 4 Proxmox nodes (up to 17 nodes in the future) and need at least 40 TB of space, so my question is: what is the better choice for me? I have a 10G switch, a lot of 4 TB SAS & SATA enterprise drives, some SSDs, and around...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
