Search results

  1. K

    Any Proxmox Ceph Users Interested in helping test Benji

    And what are people doing for database / atomic backups inside the VMs?
  2. K

    Interesting Realtek Speed Issue with PVE5

    It might also be some bad auto-negotiation on your switch; try hard-setting the port speed / duplex on the switch to match the NIC.
  3. K

    New PVE Server to SSD or Not to

    Nowadays I prefer SSDs for system drives, and overprovision them to 50%. Spinning disks also inevitably fail, while an SSD overprovisioned to 50% for an OS drive will outlive the server most of the time.
  4. K

    MySQL performance : Proxmox over zfs + Raid 0

    Or install Proxmox 6 on a node using ext4 and LVM, and migrate the VMs over...
  5. K

    Flashcache vs Cache Tiering in Ceph

    check https://docs.ceph.com/docs/mimic/rados/operations/cache-tiering/
  6. K

    upgrading 2 nodes to corosync3

    Hi Fabian, the patches did fix the MTU issue, however another ugly issue has raised its head. I have a cluster of around 30 nodes (I know all the warnings about corosync2 and 16 nodes, however this worked awesomely on corosync2). I started to move nodes from CS2 to CS3 in prep for the 5-6...
  7. K

    upgrading 2 nodes to corosync3

    Hi Fabian, it seems a patch has been applied to fix this, could it be integrated? :) Best regards, Kev
  8. K

    upgrading 2 nodes to corosync3

    I had tried fiddling with netmtu earlier, but decided to try setting some different values (1500, 20000, 65520), none of which make any difference; we end up with: Jul 29 14:40:14 kvm1 corosync[4039025]: [KNET ] pmtud: Aborting PMTUD process: Too many attempts. MTU might have changed during... (see the corosync.conf sketch after this list)
  9. K

    upgrading 2 nodes to corosync3

    Hi, I'm having a peculiar issue. My nodes use InfiniBand in connected mode for the cluster network, and storage (different interfaces). The cluster network runs on a 10.1.100.x range, and the IPoIB device looks like ib1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 65520 inet...
  10. K

    vm disk throttling

    I'll rephrase that: I see in QEMU that if you have defined disk throttles and include burstable limits, the burst limit is reduced to the normal limit after iops_max_length, bps_max_length, etc. seconds. Are there any plans to expose these in the GUI? (a throttle example follows this list)
  11. K

    vm disk throttling

    I see that with QEMU the disk throttling for IOPS kicks in after iops_max_length seconds; would it be possible to create a parameter under the 'Disk Throttle' GUI to control this?
  12. K

    proxmox 5 ceph osd create

    Answering my own question for any lost souls that see the same problem: if you have upgraded from Jewel to Proxmox 5 stable (with Ceph Luminous), then ensure you have a Ceph manager (ceph-mgr) installed and working.
  13. K

    Cluster Nodes lost

    Check that the corosync service is running on all nodes.
  14. K

    proxmox 5 ceph osd create

    Hard-adding --filestore to API2/Ceph.pm, e.g. my $cmd = ['ceph-disk', 'prepare', '--zap-disk', '--filestore', '--fs-type', $fstype, ... does create the OSD with the filestore type. I still have the other issue where newly added OSDs backfill, but the reported capacity the... (a ceph-disk sketch follows this list)
  15. K

    proxmox 5 ceph osd create

    I 'guess' it's caused by this: http://ceph.com/releases/v12-1-0-luminous-rc-released/ "Notable Changes: bluestore: ceph-disk: add --filestore argument, default to --bluestore (pr#15437, Loic Dachary, Sage Weil)", so --filestore now needs to be passed, else bluestore will be created.
  16. K

    proxmox 5 ceph osd create

    I'm already running Proxmox 5 on all nodes, and filestore with Luminous on all these nodes. However, there now appears to be a bug where no matter what I do with pveceph createosd, a bluestore type is created.
  17. K

    proxmox 5 ceph osd create

    pveceph createosd /dev/sdf -bluestore 0 -fstype xfs -journal_dev /dev/sda4 The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them. command '/sbin/zpool list -HPLv' failed: exit code 1 create OSD on /dev/sdf (xfs) using device '/dev/sda4' for journal Caution...
  18. K

    proxmox 5 ceph osd create

    pveceph createosd /dev/sdf -bluestore=0 -fstype=xfs -journal_dev /dev/sda4 still appears to create a bluestore OSD!! Please advise.
  19. K

    Problem starting VM after moving to 5.0 stable

    Hi, I have VMs that make use of the disk throttle option. After changing from Proxmox 5 beta to 5 stable, when trying to start a VM the following error was seen: Parameter 'throttling.iops-write' expects a number. The config file for the VM was scsi0...
  20. K

    proxmox 5 ceph osd create

    Hi there, I had been running the beta 5 and the deb http://download.proxmox.com/debian/pve stretch pvetest repo. Last night on this node I added some new OSDs; however, when using pveceph createosd /dev/sdx -journal_dev /dev/sdxx it appears that the OSD that was created was bluestore type (...
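
Result 8 above experiments with netmtu. For reference, netmtu lives in the totem section of /etc/pve/corosync.conf; the snippet below is only a minimal sketch of where the option sits (cluster_name, config_version and the value are placeholders, not taken from the thread), and with corosync 3's knet transport the path MTU is normally discovered automatically, which matches the PMTUD messages quoted there:

  totem {
    version: 2
    cluster_name: examplecluster    # placeholder name
    config_version: 12              # must be incremented on every edit
    netmtu: 1500                    # assumed value; 1500 is the usual default
  }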
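
Results 10, 11 and 19 concern QEMU's burstable disk throttles. As a hedged example (the VM ID, storage name and numbers are made up, and the option names should be checked against the qm man page of your PVE version), the burst ceiling and its duration can be set per drive from the CLI even though the GUI does not expose them:

  # assumption: VM 101 with a scsi0 disk on storage 'local-lvm'
  # *_max are the burst limits, *_max_length the burst duration in seconds
  qm set 101 --scsi0 local-lvm:vm-101-disk-0,iops_rd=500,iops_rd_max=2000,iops_rd_max_length=60

On the QEMU side these correspond to the -drive throttling.* properties (throttling.iops-write, throttling.iops-write-max, throttling.iops-write-max-length, and so on), which is where the error quoted in result 19 comes from.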
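
Results 14 to 18 and 20 all trace back to ceph-disk defaulting to bluestore since the Luminous RC. As a sketch only (the device paths are the ones quoted in the thread, and the flags should be verified against ceph-disk --help for your Ceph release), a filestore OSD can be forced at the ceph-disk level instead of patching API2/Ceph.pm:

  # assumption: /dev/sdf as the data disk and /dev/sda4 as the journal, as in the thread
  ceph-disk prepare --zap-disk --filestore --fs-type xfs /dev/sdf /dev/sda4
  ceph-disk activate /dev/sdf1    # the data partition created by the prepare step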
