Search results

  1. Proxmox 4.4 clusters /etc/pve/local/pve-ssl.key: failed to load

    Please post the IP configs of the servers. Was the server in the cluster before?
  2. changing ceph public network

    After restarting the OSD services, the I/O output is shown in the web interface again. Thanks, Udo
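
    (For reference, a minimal sketch of restarting the OSD services on a systemd-based Ceph release; the OSD id is a placeholder, and older sysvinit-based setups use "service ceph restart osd.0" instead:)

        systemctl restart ceph-osd@0    # repeat for each OSD id on the node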
  3. changing ceph public network

    Managed to change it, but it was not easy to do:

        ceph mon getmap -o tmpfile
        monmaptool --print tmpfile
        monmaptool --rm 0 --rm 1 --rm 2 tmpfile
        monmaptool --add 1 10.255.247.13 --add 0 10.255.247.15 --add 2 10.255.247.16 tmpfile

    then stop the monitors and reload the monmap into the ceph...
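
    (For reference, a sketch of that last step as documented for changing monitor addresses; the monitor id is a placeholder and the inject command has to be run for each monitor:)

        service ceph stop mon                  # stop all monitors first
        ceph-mon -i 0 --inject-monmap tmpfile  # load the edited monmap into this monitor
        service ceph start mon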
  4. changing ceph public network

    Hi, I've installed Ceph on 3 nodes. After doing some tests I figured out that I'd like to get the performance increase I'd gain by adding a third network device. How can I change the public network of Ceph? Just changing the public network in ceph.conf isn't enough, right? Kind regards
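
    (A minimal sketch of the ceph.conf side of this, with a placeholder subnet; note that the monitor addresses recorded in the monmap have to be changed as well, which is what the monmaptool steps above handle:)

        [global]
            public network = 10.255.248.0/24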
  5. ceph use a SW-Raid as journal

    Hi, thanks a lot for your explanation. So I was right: losing the journal will end up in total corruption of the node (I had hoped I was not right ;) ). Given that my wiring is the bottleneck for my little cluster, I'll leave the journal out of the system and run without it. There should not...
  6. ceph use a SW-Raid as journal

    But what happens when this one single SSD fails? The journal is gone and all of the data has to be rewritten, right?
  7. ceph use a SW-Raid as journal

    Got a three-node cluster with 4x 3TB drives, 128GB RAM, 2x 128GB SSDs, and no RAID controller. Proxmox is set up on top of a Debian SW-RAID setup. The reason why I do it this way is quite simple: SW-RAID is in my opinion the cheapest way to get data protection. This setup should provide me the...
  8. qcow2 corruption after snapshot or heavy disk I/O

    No, but I think the Seagate is failing.
  9. Hardware - Concept for Ceph Cluster + backup

    For HA purposes I'd say use a hardware RAID1 for the SSDs and (if possible) add 2 additional SATA hard drives to the server. Use 2 of the hard drives for Ceph and the RAID1 SSD array as journal for these drives. Create a pool with a replica of three and you will get 18TB of usable storage, that's...
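
    (A minimal sketch of creating such a pool with a replica of three; the pool name and PG count are placeholders:)

        ceph osd pool create vm-pool 128 128   # 128 placement groups
        ceph osd pool set vm-pool size 3       # keep three copies of every object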
  10. qcow2 corruption after snapshot or heavy disk I/O

    Did you check the SMART data of your hard drives?
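
    (For reference, a quick way to do that with smartmontools; the device name is a placeholder:)

        smartctl -a /dev/sda    # full report; watch Reallocated_Sector_Ct and Current_Pending_Sector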
  11. Install Windows 2016 Server on Proxmox VE (Video tutorial)

    VirtIO delivers the best I/O performance you could expect out of your hardware. VirtIO creates multiple I/O threads, not just a single one like IDE.
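
    (A minimal sketch of attaching a disk on the VirtIO bus with qm; the VM id, storage name, and size are placeholders:)

        qm set 100 --virtio0 local-lvm:32    # allocate a new 32GB disk as virtio0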
  12. ceph use a SW-Raid as journal

    Hi, I was wondering if it is possible to add an SSD SW-RAID device as journal for Ceph? Adding it manually with pveceph works, but then the OSD does not show up in the OSD list in the web interface and the device is marked as down. I know it is not recommended, but otherwise just a...
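
    (Presumably something along these lines was used; pveceph's createosd accepts a separate journal device, and the device names here are hypothetical, with /dev/md3 standing in for the SSD SW-RAID array:)

        pveceph createosd /dev/sdd -journal_dev /dev/md3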
  13. Proxmox VE4.4, LVM-THIN, CEPH, SSD-Log

    Is it also a bad idea to use a RAID10 volume as an OSD?
  14. Proxmox VE4.4, LVM-THIN, CEPH, SSD-Log

    OK, got it. I'll change the setup to 2x 500GB ZFS RAID1 for the system and the other hard drives dedicated to Ceph. But what about adding another node after creating the Ceph cluster? Will this replicate the data to the newly joining node?
  15. Proxmox VE4.4, LVM-THIN, CEPH, SSD-Log

    Hi there, Proxmox just uses LVM-thin volumes. I'm currently setting up my new VE environment. The plan is to use the LVM-thin volume for Ceph (or a part of it) and an additional SSD drive for the Ceph metadata. This setup should grow. That means I'd like to start with one server, then reinstall an...
  16. Virtual Machine Disk

    Install Proxmox on the first disk. Then create an LVM storage on your RAID1 and mount it wherever you want. Create the storage in the Proxmox web interface as a Directory. Then you can put your VMs there.
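
    (A minimal sketch of those steps, assuming the RAID1 array shows up as /dev/md1; all names are placeholders:)

        pvcreate /dev/md1
        vgcreate raid1 /dev/md1
        lvcreate -l 100%FREE -n vmstore raid1
        mkfs.ext4 /dev/raid1/vmstore
        mkdir -p /mnt/vmstore
        mount /dev/raid1/vmstore /mnt/vmstore
        pvesm add dir vmstore --path /mnt/vmstore    # same as adding a Directory storage in the web interface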
  17. [SOLVED] Migrating VBox to PVE4.2

    OK, and then mkfs on that thin volume. Got it.
  18. [SOLVED] Migrating VBox to PVE4.2

    Got it:

        lvcreate -V 150G --thin -n VMs pve/data
        mkfs.ext4 /dev/mapper/pve-VMs
        mount /dev/mapper/pve-VMs LOCATION
  19. [SOLVED] Migrating VBox to PVE4.2

    Hi, I have to migrate some virtual machines from VirtualBox to Proxmox 4.2. Normally this wouldn't be a problem, but I'm struggling with the lvm-thin storage type which is used by default. How can I create a volume that I can format with ext4 and mount to copy the VirtualBox files onto? Kind...