Recent content by Kevin Myers

  1. Any Proxmox Ceph Users Interested in helping test Benji

    And what are people doing for database / atomic backups inside the VMs?
  2. Interesting Realtek Speed Issue with PVE5

    Might also be some bad auto-negotiation on your switch; try hard-setting the port speed/duplex on the switch to match the NIC.
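
As a sketch of the NIC-side half of that (the interface name enp3s0 and the 1 GbE speed are assumptions), an /etc/network/interfaces stanza on a Proxmox/Debian host could pin the link like this:

```
auto enp3s0
iface enp3s0 inet manual
    # disable auto-negotiation and pin 1000 Mb/s full duplex,
    # matching whatever was hard-set on the switch port
    post-up ethtool -s enp3s0 speed 1000 duplex full autoneg off
```

If only one side has autoneg disabled, the other side typically falls back to half duplex, so set both ends or neither.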
  3. New PVE Server to SSD or Not to

    Nowadays I prefer SSDs for system drives, and I overprovision them to 50%. Spinning disks also inevitably fail, while an SSD overprovisioned to 50% as an OS drive will outlive the server most of the time.
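
To illustrate the 50% overprovisioning idea (the device name and sector count are made up for the example), one way is to partition only half the drive and leave the rest untouched; this sketch just computes and prints the parted command rather than running it:

```shell
# Illustrative: a ~120 GB SSD reporting 234441648 512-byte sectors;
# on a real host you would read this with: blockdev --getsz /dev/sda
TOTAL_SECTORS=234441648

# use only ~50% of the drive, leaving the remainder unpartitioned
END_SECTOR=$(( TOTAL_SECTORS / 2 ))

# print (not run) the partitioning command for review
echo "parted -s /dev/sda mkpart primary 2048s ${END_SECTOR}s"
```

The unpartitioned space is never written to, so the controller can use it freely for wear levelling.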
  4. MySQL performance: Proxmox over zfs + Raid 0

    Or install Proxmox 6 on a node using ext4 and LVM, and migrate the VMs over...
  5. Flashcache vs Cache Tiering in Ceph

    Check https://docs.ceph.com/docs/mimic/rados/operations/cache-tiering/
  6. Upgrading 2 nodes to corosync3

    Hi Fabian, the patches did fix the MTU issue; however, another ugly issue has reared its head. I have a cluster of around 30 nodes (I know all the warnings about corosync2 and 16 nodes, however this worked awesomely on corosync2). I started to move nodes from CS2 to CS3 in prep for the 5-6...
  7. Upgrading 2 nodes to corosync3

    Hi Fabian, it seems a patch has been applied to fix this; could it be integrated? :) Best regards, Kev
  8. Upgrading 2 nodes to corosync3

    I had tried fiddling with netmtu earlier, but decided to try setting some different values (1500, 20000, 65520), all of which made no difference. We end up with: Jul 29 14:40:14 kvm1 corosync[4039025]: [KNET ] pmtud: Aborting PMTUD process: Too many attempts. MTU might have changed during...
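
For reference, those values were going into the totem section of corosync.conf, roughly like this (the cluster name is illustrative):

```
totem {
  version: 2
  cluster_name: pvecluster   # illustrative
  # tried 1500, 20000 and 65520 here; none made a difference
  netmtu: 65520
}
```

With corosync 3 / knet the effective MTU is normally discovered by PMTUD, so a manually set netmtu may simply be overridden by that process.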
  9. Upgrading 2 nodes to corosync3

    Hi, I'm having a peculiar issue. My nodes used InfiniBand in connected mode for the cluster network and storage (different interfaces). The cluster network runs on a 10.1.100.x range, and the IPoIB device looks like ib1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 65520 inet...
  10. vm disk throttling

    I'll rephrase that: I see in QEMU that if you have defined disk throttles and include burstable limits, the burst limit is reduced to the normal limit after iops_max_length, bps_max_length, etc. seconds. Are there any plans to expose these in the GUI?
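
For context, the VM config already accepts these knobs by hand even though the GUI does not expose the *_max_length ones; a hand-edited disk line might look like this (VMID, storage and numbers are illustrative):

```
# /etc/pve/qemu-server/100.conf
# burst up to 2000 IOPS for at most 60 s, then fall back to the steady 500 IOPS cap
scsi0: local-lvm:vm-100-disk-0,size=32G,iops_rd=500,iops_rd_max=2000,iops_rd_max_length=60,iops_wr=500,iops_wr_max=2000,iops_wr_max_length=60
```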
  11. vm disk throttling

    I see that with QEMU the disk throttling for IOPS kicks in after iops_max_length seconds; would it be possible to create a parameter under the 'Disk Throttle' GUI to control this?
  12. proxmox 5 ceph osd create

    Answering my own question for any lost souls who see the same problem: if you have upgraded from Jewel to Proxmox 5 stable (with Ceph Luminous), then ensure you have a Ceph mgr installed and working.
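
A quick way to check and fix that on a PVE 5 / Luminous node might look like this (commands shown for illustration, to be run on the node itself):

```
# does the cluster report a mgr at all?
ceph -s | grep mgr

# if not, create one on this node
pveceph createmgr
```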
  13. Cluster Nodes lost

    Check that the corosync service is running on all nodes.
  14. proxmox 5 ceph osd create

    Hard-adding --filestore to API2/Ceph.pm, e.g. my $cmd = ['ceph-disk', 'prepare', '--zap-disk', '--filestore', '--fs-type', $fstype, does create the OSD with the filestore type. Still have the other issue where newly added OSDs backfill, but the reported capacity the...
  15. proxmox 5 ceph osd create

    I 'guess' it's caused by this: http://ceph.com/releases/v12-1-0-luminous-rc-released/ "Notable Changes bluestore: ceph-disk: add --filestore argument, default to --bluestore (pr#15437, Loic Dachary, Sage Weil)", so --filestore now needs to be passed, else a bluestore OSD will be created.
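
In other words, the command-line equivalent of what the patched code runs would be something like (device and filesystem are illustrative):

```
# Luminous's ceph-disk defaults to bluestore; ask for filestore explicitly
ceph-disk prepare --zap-disk --filestore --fs-type xfs /dev/sdb
```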
