Search results

  1. PGs Stale+Peering

    Hi, we got a problem with 5 PGs that are not rebuilding:

        ceph -s
          cluster:
            id:     e907418e-7914-4e8c-9c0d-1bc300e92020
            health: HEALTH_ERR
                    22227/1858782 objects misplaced (1.196%)
                    Reduced data availability: 5 pgs inactive, 5 pgs peering, 5 pgs stale

    ceph pg...
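
    A minimal sketch of how one might narrow this down, assuming the standard ceph CLI (the PG id below is a placeholder, not taken from the thread):

        # list the PGs the cluster considers stuck
        ceph pg dump_stuck stale
        ceph pg dump_stuck inactive

        # then query one of the reported PGs for its peering state
        ceph pg 1.2f query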
  2. pve-cluster resending xxx messages

    Hi, we got a problem with a cluster that had a stuck node. we couldn't reboot the host, but we managed to bring it back to working within the cluster. pvecm status looks fine, but all the hosts are showing red in the web interface (except the node you're logged into). syslog is telling me...
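
    A few standard checks for this kind of symptom, assuming a PVE 4.x cluster (none of this is taken from the thread itself):

        # quorum and membership as corosync sees it
        pvecm status

        # state and recent logs of the cluster services
        systemctl status pve-cluster corosync
        journalctl -u pve-cluster -u corosync --since '1 hour ago'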
  3. conf destroyed - rebuild possible?

    uhh... wow - it happened a few days ago, so the backups are all empty... not nice... thank you.
  4. conf destroyed - rebuild possible?

    ok - how can i do this without restoring the whole VM?
  5. conf destroyed - rebuild possible?

    Hi, i've got a really unusual problem: one of my VMs' config file is destroyed - it's empty. My question is, is it possible to parse the VM process parameters back into a config file? kind regards
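
    There's no official tool I know of for this direction (qm showcmd goes the other way, config to command line), but while the VM process is still running you can at least read its parameters back as a starting point for rebuilding the config by hand. A rough sketch - VMID 100 is a placeholder, and the '-id' match assumes the usual PVE kvm invocation:

        # find the kvm process belonging to the VM
        pgrep -a -f 'kvm .*-id 100'

        # print its full argument list, one argument per line
        tr '\0' '\n' < /proc/<PID>/cmdline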
  6. IPv4 address change

    no - not if you want it done cleanly. you could restart pve-cluster after the change, but there's no guarantee that alone will do it. don't the /etc/pve/.members also need to be adjusted? oh right, and the hosts file of each individual server.
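
    A rough sketch of the places an old address tends to linger, assuming a PVE 4.x node (the address below is a placeholder and the list is not guaranteed to be complete):

        # find lingering references to the old address
        grep '192.168.0.10' /etc/hosts /etc/network/interfaces /etc/pve/corosync.conf

        # then restart the cluster filesystem service, as suggested above
        systemctl restart pve-cluster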
  7. After unplanned reboot locked VMs don't start

    i assume ragequit ;) stay calm, gkovacs. in normal operation you will only have to do this when your server crashes, and that shouldn't happen that often. that's the reason why i gave you the advice to check your RAM. i was reading your post, but - working around something that happens all the time is...
  8. [SOLVED] Installation problem on Debian Jessie

    please take a look into /etc/hosts. the IP there should be the one of your server. Debian Jessie writes 127.0.1.1 there - if this is the case, change it and dpkg-reconfigure all
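
    For illustration, the before/after usually looks like this (hostname and address are placeholders):

        # /etc/hosts as Debian Jessie tends to write it:
        127.0.0.1   localhost
        127.0.1.1   pve1

        # what Proxmox expects - the hostname resolving to the machine's real address:
        127.0.0.1    localhost
        192.168.1.10 pve1.example.com pve1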
  9. After unplanned reboot locked VMs don't start

    why don't you just be happy that anyone replies to you? vzdump needs to lock the VM. don't look for something to blame Proxmox for - look into why your server reboots
  10. Rollback snapshot from raw file

    for future issues like this: use a different VM ID for the restore if you've got a VM that is corrupt. you may need it to copy data to another or a new server.
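
    A minimal sketch of that approach (archive path and VMIDs are placeholders):

        # restore the backup under a fresh VMID so the corrupt VM stays untouched
        qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.lzo 200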
  11. Proxmox 4.4 cluster: /etc/pve/local/pve-ssl.key: failed to load

    ping is not the only thing that has to work for a working cluster https://pve.proxmox.com/wiki/Multicast_notes => Using omping to test multicast
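
    From that wiki page, a basic multicast test looks roughly like this, run on all nodes at the same time (node names are placeholders):

        omping -c 600 -i 1 -q nodeA nodeB nodeC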
  12. Restore Proxmox 1.9 .tgz backups to new 4.4 Server

    Debian Squeeze is not a supported version anymore. that's why you cannot migrate into LXC. i'd prefer migrating into KVM instead
  13. Proxmox 4.4 cluster: /etc/pve/local/pve-ssl.key: failed to load

    find the difference:

        inet 129.138.16.182
        inet 192.168.123.64

    you will not be able to get multicast working properly across these different subnets, right?
  14. Problems getting vm to access web

    you have to visit the OVH forum to find the answer
  15. Restore Proxmox 1.9 .tgz backups to new 4.4 Server

    use --storage as an option for the qmrestore command to point it to the right location.
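
    For example (archive path, VMID and storage name are placeholders):

        qmrestore /backups/vzdump-100.tgz 100 --storage local-lvm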
  16. FTP Backup for an OVH server

    you can mount it to any path you like. you just have to point your storage configuration to the right path afterwards.
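
    One way to wire that up - curlftpfs is my assumption here, the snippet doesn't name a tool; host, credentials and paths are placeholders:

        # mount the FTP backup space somewhere
        curlftpfs ftp://user:password@ftpback.example.net /mnt/ftpbackup

        # then declare it as a directory storage in /etc/pve/storage.cfg:
        dir: ftpbackup
            path /mnt/ftpbackup
            content backup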
  17. Advice Regarding 5 node Proxmox Ceph Setup

    hi, i wouldn't call myself a guru, but what i figured is that 10G in a high-IO setup is a must - as you've got. so what issues did you run into before giving up?