Search results

  1. firewall packet drop at nf_conntrack

    I have about 18 cluster servers that all share the same firewall rules. All of them are fine except one node: once the firewall is enabled, all connections drop, and the logs show Oct 4 05:31:29 xx kernel: [250319.678513] nf_conntrack: nf_conntrack: table full, dropping packet Oct 4 05:31:29 xx kernel: [250319.678799]...
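    The "table full, dropping packet" message means the node's connection-tracking table is exhausted, so new connections are dropped as soon as the firewall (and with it conntrack) is enabled. A minimal check-and-raise sketch, assuming the limit itself (rather than a connection leak) is the problem; 1048576 is only an example value to be sized against available RAM:

      # compare current usage against the limit
      sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
      # raise the limit on the running kernel
      sysctl -w net.netfilter.nf_conntrack_max=1048576
      # persist the change across reboots
      echo 'net.netfilter.nf_conntrack_max = 1048576' > /etc/sysctl.d/90-conntrack.conf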
  2. Cloud-init bug

    I installed cloud-init with yum install cloud-init on CentOS 7, and on Debian 9 as well; the issue is really annoying. Thank you anyway.
  3. [SOLVED] How to install newer version of Cloud-init?

    Had the same issue, even the bug where root access (instead of the SSH key) never works without setting it from PuTTY first; it comes with both the CentOS 7 and Debian 9 images.
  4. Cloud-init bug

    Hello, it seems there is a bug with the Cloud-init root password. I understand that Cloud-init works with SSH key access, but it should also allow normal root access. We set up a template and activated root access, however it never works: it requires logging in with the SSH key first and then setting the root password by using "passwd". The...
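    One hedged workaround sketch at the Proxmox layer, assuming the clone is VM 100 (the ID and password are placeholders): set the cloud-init user and password with qm so the generated config carries a root password as well as the key. Whether password login then actually works still depends on the image's own cloud-init defaults (disable_root, ssh_pwauth), which this does not change.

      # set cloud-init credentials on the clone; the cloud-init drive is
      # regenerated from these options the next time the VM starts
      qm set 100 --ciuser root --cipassword 'S0me-Strong-Pass'
      # optionally keep an SSH key alongside the password
      qm set 100 --sshkeys /root/id_rsa.pub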
  5. Multiple Ceph pools issue

    After a few hours of searching I figured something out, though I'm not sure if it really works. This is how it looks now: # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable chooseleaf_vary_r 1 tunable...
  6. Multiple Ceph pools issue

    I followed https://ceph.com/community/new-luminous-crush-device-classes/ and added the rules, which seems fine, but I'm not sure why Ceph started replicating the HDD data to the NVMe as well. # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries...
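    HDD data landing on the NVMe OSDs usually means the pools are still on the default replicated rule, which ignores device classes. A sketch of class-aware rules in the spirit of that article (rule and pool names are placeholders):

      # one replicated rule per device class (root "default", failure domain "host")
      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd crush rule create-replicated replicated_nvme default host nvme
      # point each pool at its matching rule
      ceph osd pool set hdd_pool crush_rule replicated_hdd
      ceph osd pool set nvme_pool crush_rule replicated_nvme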
  7. ceph raw 2tb max?

    Hello, I have one KVM guest with a 5TB Ceph raw disk and Cloud-init active. The issue is that the system is on one partition, vda1, and I can only see 2TB of it: root@xxx:~# df -H Filesystem Size Used Avail Use% Mounted on udev 511M 0 511M 0% /dev tmpfs 105M 5.1M 100M 5% /run...
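    Seeing only 2TB of a 5TB virtual disk usually points to an MBR (msdos) partition table, which cannot address a partition beyond 2TiB, so cloud-init's growpart has nothing it can grow into. A quick diagnostic sketch using the device from the post:

      # shows "Partition Table: msdos" vs "gpt", plus partition sizes
      parted /dev/vda print
      lsblk /dev/vda

    If it is msdos, the usual options are converting the disk to GPT or attaching the extra space as a second virtual disk; neither happens automatically.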
  8. Ceph question

    Hello, if I use an NVMe for the Proxmox OS in a Ceph node without any RAID and this drive fails, will I lose everything on that node, even the OSDs and the journal? Or, if I get it replaced, reinstall a fresh OS, and then join the cluster again, will all the OSDs still be there and available? Side note: the NVMe for...
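    Assuming the OSD data and their journal/DB live on disks other than the failed OS NVMe, the OSDs themselves survive an OS reinstall: once the node has its /etc/ceph config and keyring back and has rejoined the cluster, the existing OSDs can usually be re-detected rather than recreated. A sketch of that step (if the DB/journal was on the lost NVMe, those OSDs would have to be rebuilt instead):

      # scan the LVM-based OSDs still present on the data disks and start them
      ceph-volume lvm activate --all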
  9. is that best speed i can got?

    It seems quite strange that I get better performance with only 2 x 6TB SATA drives while using Filestore instead of Bluestore?! rados bench -p test 60 write --no-cleanup Total time run: 62.803659 Total writes made: 1696 Write size: 4194304 Object size: 4194304...
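    To make the Filestore-vs-Bluestore comparison fair, it helps to pair the write run with the matching read runs on the same pool, reusing the objects left behind by --no-cleanup; a sketch with the same rados bench tool as above:

      # sequential and random reads against the objects from the write run
      rados bench -p test 60 seq
      rados bench -p test 60 rand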
  10. is that best speed i can got?

    If I add more drives, will it give better performance? Also, I'm not sure whether I need to add more journal devices or whether one P3700 per node will be enough.
  11. is that best speed i can got?

    Well, I'm not sure what the issue is; I've spent 4 days trying to figure it out!
  12. is that best speed i can got?

    I'm using a Mellanox SX6025 non-blocking unmanaged 56Gb/s SDN switch; not sure if this will even work if I increase the MTU. I'm using MTU 65520. Is there any way to increase it, and if yes, to how much?
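    With IPoIB the MTU ceiling depends on the port mode: datagram mode is limited to a small MTU (roughly 4K at most), while connected mode allows up to 65520, which is what is already in use here, so there is essentially no headroom left. A quick check sketch, assuming the IPoIB interface is called ib0 (the name is an assumption):

      # "connected" allows MTU up to 65520, "datagram" does not
      cat /sys/class/net/ib0/mode
      ip link show ib0 | grep mtu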
  13. is that best speed i can got?

    Allocated VMs = 0, nothing on the system. What disks do you use exactly = Seagate 7200rpm, 256MB cache, 6Gb/s.
  14. is that best speed i can got?

    Hello, I'm running Ceph and not sure if this is the best speed for my configuration: 3 OSDs with 5TB enterprise hard drives and an NVMe P3700 as the Bluestore journal/DB disk. My concern is that I need a lot of space and a lot of speed as well, so if I add more of the 5TB drives, will it speed up? Or add more journal...
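    If more 5TB drives are added, each new Bluestore OSD can keep its DB/WAL on another partition of the same P3700, provided the NVMe still has free space and does not itself become the bottleneck. A sketch for creating one such OSD (device paths are placeholders):

      # data on the new HDD, RocksDB/WAL on an NVMe partition
      ceph-volume lvm create --bluestore --data /dev/sdX --block.db /dev/nvme0n1p4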
  15. ceph very slow perfomance

    osd  commit_latency(ms)  apply_latency(ms)
      8                  65                 65
      7                  74                 74
      6                  52                 52
      3                   0                  0
      5                 214                214
      0                  70                 70
      1                 ...
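    These per-OSD latency columns match the output of the command below, which is a quick way to spot a single slow OSD (OSD 5 at 214 ms stands out here):

      # per-OSD commit/apply latency snapshot
      ceph osd perf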
  16. Proxmox VE Ceph Benchmark 2018/02

    Did some tests: dual E5-2660, 75 GB RAM, SM863 as the OS/host disk, dual-port Mellanox 56Gb/s, 3 x 5TB hard-drive OSDs per server (9 OSDs total), 1 x P3700 journal per node (3 total). osd commit_latency(ms) apply_latency(ms) 8 65 65 7 74 74 6...
  17. ceph very slow perfomance

    Can anyone give me advice on this, and what is the best configuration I can go with? Thanks.
  18. ceph very slow perfomance

    I was able to set up the Mellanox dual-port 54Gb/s FDR card, though only without bonding. root@c18:~# rados -p test bench 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects Object prefix...
  19. ceph very slow perfomance

    I would use a different subnet on eth4 or eth5, or even on both, when using bonding; otherwise the network does not come fully up and the bonding IPs do not ping between nodes. For now I use one NIC port without bonding, and here is the test: root@ceph4:~# rados bench -p test 60 seq hints = 1 sec Cur ops started...
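    For reference, a bond normally carries one subnet on the bond interface itself rather than a different subnet per slave port, and over IPoIB the bonding driver generally supports only active-backup mode. A minimal ifupdown sketch, assuming the two ports are eth4/eth5 and 10.10.10.0/24 is the cluster network (names and addresses are placeholders):

      # /etc/network/interfaces fragment; needs the ifenslave package, then: ifup bond0
      auto bond0
      iface bond0 inet static
          address 10.10.10.14
          netmask 255.255.255.0
          bond-slaves eth4 eth5
          bond-mode active-backup
          bond-miimon 100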
