Search results

  1. Bluestore WAL/DB migration script

    To whomever it may be useful: I picked up this script in the same forum and modified it a little. It helped me move all my WAL/DB from HDD to SSD, and it can also be used to replace an older SSD holding the WAL/DB with a new SSD. Just pass in all the OSD names and have a good night's sleep. The process is...
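
    (For context, a minimal sketch of one way such a per-OSD migration can be done with ceph-bluestore-tool -- this is not the script from the thread; it assumes a reasonably recent Ceph release that has the bluefs-bdev-* subcommands, and the OSD id and target partition below are placeholders:)

      ID=4                                     # hypothetical OSD id
      DB_DEV=/dev/nvme0n1p1                    # hypothetical partition reserved for this OSD's DB/WAL
      ceph osd set noout                       # keep the cluster from rebalancing while the OSD is down
      systemctl stop ceph-osd@$ID
      ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-$ID --dev-target $DB_DEV
      ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-$ID \
          --devs-source /var/lib/ceph/osd/ceph-$ID/block --dev-target /var/lib/ceph/osd/ceph-$ID/block.db
      chown -h ceph:ceph /var/lib/ceph/osd/ceph-$ID/block.db
      systemctl start ceph-osd@$ID
      ceph osd unset noout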
  2. Backup a disk drive

    Is the other system also Proxmox? Why not back up the whole VM and restore it there?
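
    (A rough sketch of that approach, assuming VMID 106 and a storage named "local" available on both hosts -- IDs, storage names and paths are placeholders:)

      vzdump 106 --storage local --mode snapshot --compress lzo         # on the source host
      # copy the resulting vzdump-qemu-106-*.vma.lzo archive to the other host, then:
      qmrestore /var/lib/vz/dump/vzdump-qemu-106-<timestamp>.vma.lzo 106 --storage local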
  3. Slow speed on SSD ZFS pool

    Try with cache=writeback.
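
    (A minimal sketch of setting that per disk from the CLI, assuming VMID 106 with its disk vm-106-disk-0 on a ZFS storage called local-zfs; the same option is available in the GUI under Hardware -> Hard Disk -> Cache:)

      # re-add any other options that were already set on that disk line
      qm set 106 --scsi0 local-zfs:vm-106-disk-0,cache=writeback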
  4. Proxmox server migration (from HDD to SSD)

    The simplest approach would be to just plug in the SSDs and add them as new OSDs.
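
    (Sketch under the assumption that the new SSD shows up as /dev/sdq and is empty -- the device name is a placeholder:)

      pveceph createosd /dev/sdq     # "pveceph osd create /dev/sdq" on newer PVE releases
      ceph -s                        # then watch the backfill/rebalance complete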
  5. Machines are EXTREMELY slow

    Are you using the VirtIO drivers? https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers I had the same issue, but with the VirtIO drivers it was fast. They have to be loaded at the beginning of the Windows installation.
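
    (A hedged sketch of the usual setup, assuming VMID 200 and the virtio-win ISO uploaded to the "local" storage -- IDs and storage names are placeholders:)

      qm set 200 --ide3 local:iso/virtio-win.iso,media=cdrom   # attach the driver ISO as a second CD-ROM
      qm set 200 --scsihw virtio-scsi-pci                      # VirtIO SCSI controller for the system disk
      qm set 200 --net0 virtio,bridge=vmbr0                    # VirtIO network adapter
      # During Windows setup, click "Load driver" and point it at the virtio-win CD.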
  6. proxmox GUI for IAAS provider

    Some screenshots are attached. How would you like a demo? Some screen sharing?
  7. proxmox GUI for IAAS provider

    I have my Proxmox integrated with WHMCS. Pretty neat interface. It doesn't beat DigitalOcean, but it does everything needed. I can give you a demo if you like :)
  8. Bluestore WAL/DB on SSD sizing

    Anyone? o_O Pretty please :) I was hoping to do the conversion to SSD over the weekend. Any insight/help is highly appreciated.
  9. Bluestore WAL/DB on SSD sizing

    Hello all, I recently decided to use SSDs in order to improve the performance of my cluster. Here is my cluster setup: 4 nodes; 36 HDDs x 465 GB per node; CPU(s): 8 x Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz (2 sockets) per node; RAM: 128 GB per node. I wanted to move all my WAL/DB to the new SSDs in order to improve...
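
    (Back-of-the-envelope sizing for that layout, using the "block.db of roughly 1-4% of the data device" rule of thumb from the Ceph BlueStore docs; the exact ratio is workload-dependent, so treat these numbers only as a starting point:)

      awk 'BEGIN { per_osd = 465 * 0.04; printf "DB per OSD : %.1f GB\n", per_osd;
                   printf "DB per node: %.0f GB for 36 OSDs\n", per_osd * 36 }'
      # -> roughly 18.6 GB per OSD and ~670 GB of SSD per node at the 4% end of the range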
  10. Move Ceph journal to SSD?

    OK, I figured out how to run it, but here is the error I am getting: ./ceph_migration.sh "4" migrating OSD 4 Device: /dev/sdn /dev/sdp /dev/sdo bluestore found... exiting { "bluestore": 62 } osd.4 is already out. OSD(s) 4 are safe to destroy without reducing data durability...
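
    (The "bluestore found... exiting" line suggests the script only handles FileStore journals; a quick way to confirm what osd.4 is actually running -- a hedged sketch, not taken from the thread:)

      ceph osd metadata 4 | grep osd_objectstore    # prints "filestore" or "bluestore"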
  11. Move Ceph journal to SSD?

    Hey Tommytom, sorry for digging up an old thread. Can you please explain how this script works? I recently installed an SSD and want to move my journal to it. What does this $1 indicate? Can I run this script for all OSDs?
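
    (In shell scripts, "$1" is simply the first command-line argument -- here the OSD id. A hypothetical usage sketch, one OSD at a time or looped:)

      ./ceph_migration.sh 4                                     # migrate only osd.4
      for id in 0 1 2 3; do ./ceph_migration.sh "$id"; done     # hypothetical loop over several OSD ids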
  12. HA Cluster with VMs accessible by public IPs

    I have a similar setup, with public IPs configured on my VMs directly (the NICs are bridged on vmbr). When a VM fails over and migrates, it is still accessible via its public IP.
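
    (A minimal sketch of that bridge in /etc/network/interfaces, assuming eno1 is the uplink carrying the public subnet; the interface name and the 203.0.113.0/24 example addresses are placeholders:)

      auto vmbr0
      iface vmbr0 inet static
          address 203.0.113.10/24        # the host's own public IP
          gateway 203.0.113.1
          bridge-ports eno1              # "bridge_ports" with an underscore on older ifupdown setups
          bridge-stp off
          bridge-fd 0
      # VMs get a NIC on vmbr0 (e.g. net0: virtio,bridge=vmbr0) and configure their own
      # public IPs inside the guest, so they stay reachable after an HA migration.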
  13. CEPH SSD Vs 10Gb Ethernet Upgrade

    Thanks @LnxBil for all the right pointers. I have decided to go forward with these SSDs. I will post performance results once the upgrade is done. https://www.ebay.com/itm/Samsung-MZ-7LH240NE-883DCT-240GB-2-5-SATA3-V-NAND-3-bit-MLC-SSD/362449996254
  14. CEPH SSD Vs 10Gb Ethernet Upgrade

    Thank you for all the assistance. The link you mentioned is very confusing: it talks about WAL + DB, which I have no idea about, and it doesn't give concrete steps to determine the right size for the journal :(
  15. CEPH SSD Vs 10Gb Ethernet Upgrade

    How can I determine the right size for the SSDs?
  16. CEPH SSD Vs 10Gb Ethernet Upgrade

    Thanks @LnxBil, my next question would be: would I use the SSDs just as journal disks? If so, can I use a single SSD per server for all my OSDs, and how do I replace the existing journal disk, which is an HDD? I have 2 journal disks in each server. Sorry if that's a lot of questions. I am just looking for the right...
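
    (For the replacement part, a hedged sketch of moving a FileStore journal to a new device -- it assumes osd.4, FileStore, and an already-partitioned SSD; the partition UUID is a placeholder:)

      ceph osd set noout
      systemctl stop ceph-osd@4
      ceph-osd -i 4 --flush-journal                                        # drain the old journal
      ln -sf /dev/disk/by-partuuid/<uuid-of-the-ssd-partition> /var/lib/ceph/osd/ceph-4/journal
      ceph-osd -i 4 --mkjournal                                            # create the journal on the SSD
      systemctl start ceph-osd@4
      ceph osd unset noout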
  17. CEPH SSD Vs 10Gb Ethernet Upgrade

    Hello all, I need some recommendations on improving my cluster. I have a 4-node cluster with Ceph, with 65 OSDs and 30 TB of capacity. My cluster communication currently runs over 1G NICs, and I don't have any SSDs in my servers. I recently had issues with disk I/O time on some web-server applications...
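
    (Before choosing between SSDs and a 10 Gb upgrade it can help to measure where the bottleneck actually is; a hedged sketch, assuming a throwaway pool named "bench" that can safely be written to:)

      ceph osd perf                                   # per-OSD commit/apply latencies
      rados bench -p bench 60 write --no-cleanup      # raw cluster write throughput and latency
      rados bench -p bench 60 seq                     # sequential read pass over the same objects
      rados -p bench cleanup                          # remove the benchmark objects afterwards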
  18. VM started to fail after failed HA failover

    This tutorial helped me fix the issue. https://pve.proxmox.com/wiki/Moving_disk_image_from_one_KVM_machine_to_another
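
    (The gist of that wiki page, as a hedged sketch for a directory storage -- VMIDs, the storage name and the qcow2 paths are placeholders, and other storage types differ:)

      mkdir -p /var/lib/vz/images/107
      # move/rename the volume so it matches the target VMID, then attach it:
      mv /var/lib/vz/images/106/vm-106-disk-1.qcow2 /var/lib/vz/images/107/vm-107-disk-1.qcow2
      qm set 107 --scsi0 local:107/vm-107-disk-1.qcow2
      qm rescan                                        # let PVE pick up any remaining orphaned disks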
  19. VM started to fail after failed HA failover

    Ah, I see my fault there. Can I move that VM-106-disk-1 to the other node and start up the machine?
  20. VM started to fail after failed HA failover

    pveversion -v
    proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
    pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
    pve-kernel-4.13.13-2-pve: 4.13.13-33
    pve-kernel-4.13.13-5-pve: 4.13.13-38
    libpve-http-server-perl: 2.0-8
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1...