Search results

  1. Windows Guest - Slow disk performance in RBD Pool

    I recently built a dev cluster to test Ceph performance. Using a Windows Server 2019 guest with CrystalDiskMark, I am getting very slow speeds in read and write testing. Reads: 140 MB/s vs 4000 MB/s on a disk attached to NFS storage. Writes: 90 MB/s vs 1643 MB/s. ceph.conf [global]...
  2. Removed ceph, restarted node and all nodes went down. Why?!

    We had a node failure that took down the Ceph manager service. I know there should have been more than one running, but ceph -s said there were 2 on standby that never took over. Ceph was completely pooched and we had to restore from backups; luckily we managed to recover some stuff from...
  3. Unable to create Ceph monitor - No Active IP for public network

    I have a 12-node cluster, 6 at each of two locations. Location one nodes use .2.0/24, the second location's nodes use .39.0/24. Nodes can all ping one another, but when trying to create a Ceph monitor on any node at the second location (.39) the error states: Multiple Ceph public networks detected on putsproxp07... (A hedged ceph.conf sketch for this two-subnet case follows the results list.)
  4. Prevent SystemD from renaming after upgrade.

    When I upgraded my test cluster from 6.x to 7.x there were no issues. Today, when upgrading one of my production nodes, it appears that systemd used a new naming scheme and all my interfaces changed as follows: ens3f0 - enp175s0f0, ens3f1 - enp175s0f1, ens6f0 - enp24s0f0, ens6f1 - enp24s0f1... (A sketch of pinning interface names with a systemd .link file follows the results list.)
  5. Filter or modify displayed syslog?

    My syslog on all nodes is basically page after page of: Dec 12 13:03:27 putsproxp10 corosync[3147]: [KNET ] pmtud: Starting PMTUD for host: 7 link: 0 Dec 12 13:03:27 putsproxp10 corosync[3147]: [KNET ] udp: detected kernel MTU: 1500 Dec 12 13:03:27 putsproxp10 corosync[3147]: [KNET ]... (A hedged rsyslog filter sketch for this KNET noise follows the results list.)
  6. Rebooting VHD host for updates.

    Is there a best practice for restarting the host of the virtual disks? The boot drives are all held in a local volume that is replicated to all the nodes, but the data/storage/database disks are housed on network-attached storage. I'd like to avoid manually shutting down 100+ VMs running...
  7. [SOLVED] vzdump fails - sysfs write failed

    vzdump backups all functioned until July 20, 2020. Now all vzdump jobs fail with a similar error: INFO: starting new backup job: vzdump 101 --compress zstd --node putsproxp04 --remove 0 --mode snapshot --storage nas06p2 INFO: Starting Backup of VM 101 (qemu) INFO: Backup started at 2020-08-03 14:24:44...
  8. After update of Nov 14 - monitors fail to start

    I ran the updates, which installed a new kernel. After the reboot the monitor did not start. I attempted to start it from the command line: systemctl status ceph-mon@proxp01.service ● ceph-mon@proxp01.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled...
  9. Ceph OSD folder found empty

    One of 4 nodes has lost the OSD configuration. All nodes are running: proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve) pve-manager: 6.0-6 (running version: 6.0-6/c71f879f) ceph: 14.2.2-pve1 ceph-fuse: 14.2.2-pve1 corosync: 3.0.2-pve2 The OSD GUI screen shows 4 (of 16) OSD drives as down and...
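
For result 3 above, the usual fix is to declare both public subnets in ceph.conf and tell pveceph which local address the new monitor should bind to. A minimal sketch, assuming hypothetical full prefixes 10.10.2.0/24 and 10.10.39.0/24 (the real prefixes are truncated in the excerpt) and a pveceph version that supports the --mon-address option:

    # /etc/pve/ceph.conf - list both site subnets as public networks
    [global]
        public_network = 10.10.2.0/24, 10.10.39.0/24

    # on the .39-site node, create the monitor bound to its local address
    pveceph mon create --mon-address 10.10.39.17

The address passed to --mon-address is hypothetical; it just needs to be an IP the node actually holds inside one of the listed public networks.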
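
For result 4 above, interface names can be pinned so a systemd/kernel upgrade does not rename them, using a systemd .link file that matches each NIC by MAC address. A minimal sketch with a hypothetical MAC and file name; on Proxmox (Debian) the initramfs should be refreshed so the rule also applies at early boot:

    # /etc/systemd/network/10-ens3f0.link - one file per NIC, MAC is hypothetical
    [Match]
    MACAddress=aa:bb:cc:dd:ee:f0

    [Link]
    Name=ens3f0

    # rebuild the initramfs so the .link file is honoured during early boot, then reboot
    update-initramfs -u -k all

Switching to net.ifnames=0 on the kernel command line achieves a similar result with classic eth0-style names, but per-NIC .link files keep the names explicit and predictable.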
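
For result 5 above, the repetitive corosync KNET lines can be dropped from syslog with an rsyslog filter (they remain available via journalctl). A minimal sketch, assuming the stock rsyslog setup; the drop-in file name is arbitrary:

    # /etc/rsyslog.d/10-drop-knet-noise.conf
    if $programname == 'corosync' and ($msg contains '[KNET ] pmtud' or $msg contains '[KNET ] udp') then stop

    # reload rsyslog to apply the filter
    systemctl restart rsyslog

Alternatively, the knet chatter can often be reduced at the source by adjusting the logging section of /etc/pve/corosync.conf, which avoids filtering at the syslog layer.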
