Search results

  1. C

    Errors into backup

    The config is:

    bootdisk: scsi0
    cores: 6
    lock: backup
    memory: 3072
    name: nfs01-mail.storage.xxxxx.net
    net0: e1000=5A:7D:03:27:B0:62,bridge=vmbr1
    onboot: 1
    ostype: l26
    scsi0: local:106/vm-106-disk-1.qcow2,format=qcow2,cache=writethrough,size=20G
    scsi1...
  2. C

    Errors into backup

    Hi. I upgraded to Proxmox 3.0. Now when I try to back up I get errors and the backup fails. Example: ERROR: Backup of VM 106 failed - command 'qm set 106 --lock backup' failed: exit code 25. Any idea how to fix this? (See the lock-clearing sketch after this results list.)
  3. C

    Clustered storage status (ceph & sheepdog)

    Please upgrade ceph to v0.56. I have some issues with a 0.48 client and 0.56 servers. Thanks.
  4. C

    Clustered storage status (ceph & sheepdog)

    I have been testing ceph for some weeks now and it works fine. I get errors when backups run and the VM is selected for backup; I need to check "No Backup" for the ceph images. In the next few days ceph will release a major stable version with some performance improvements, and it is a good idea to upgrade to it.
  5. C

    Clustered storage status (ceph & sheepdog)

    Thanks. After adding the parameters, do I need to reboot the Proxmox server or shut down and start the VM? Does the disk cache setting work fine with ceph?
  6. C

    Clustered storage status (ceph & sheepdog)

    Do I need to create /etc/ceph.conf? On my servers the file /etc/ceph.conf doesn't exist.
  7. C

    Clustered storage status (ceph & sheepdog)

    Hi. I'm testing ceph on Proxmox. I need to set rbd_cache_size, rbd_cache_max_age and the cache type. Is that possible, and how? I see low performance and maybe it is an issue with the cache setup. (See the ceph.conf sketch after this results list.)
  8. C

    Clustered storage status (ceph & sheepdog)

    Can I add the same rbd pool on all the nodes of the cluster?
  9. C

    Clustered storage status (ceph & sheepdog)

    I ran a new full-upgrade and now it works.

    pvesm status
    backup   nfs   1   1457861632   955720704   429176832   69.51%
    images   nfs   1   1457861632   955720704   429176832   69.51%
    local    dir   1    349130060      199400   348930660    0.56%
    web01    rbd   1...
  10. C

    Clustered storage status (ceph & sheepdog)

    file /etc/pve/storage.cfg line 22 (skip section 'web01'): unsupported type 'rbd'

    backup   nfs   1   1457861632   955720704   429176832   69.51%
    images   nfs   1   1457861632   955720704   429176832   69.51%
    local    dir   1    349130060      199400   348930660    0.56%...
  11. C

    Clustered storage status (ceph & sheepdog)

    I have an issue. I followed this tutorial: http://pve.proxmox.com/wiki/Storage:_Ceph but I can't find the rbd storage when I create a VM or check the storages. How do I refresh storage.cfg in the GUI? (See the storage.cfg sketch after this results list.)
  12. C

    Super slow Backup with pve 2.x

    Same issue here. Slow speed for KVM VPS backups via NFS on 2.1.
  13. C

    Clustered storage status (ceph & sheepdog)

    Any news about version 0.48?
  14. C

    Clustered storage status (ceph & sheepdog)

    Is it possible to use the ceph repo to upgrade ceph-common? Are they compatible?
  15. C

    Clustered storage status (ceph & sheepdog)

    I checked. ceph-common is 0.47.2. Is it possible to update it to 0.48? It is the major stable release of ceph and it changes the internal way of working.
  16. C

    Clustered storage status (ceph & sheepdog)

    I will test it. Does this support ceph 0.48?
  17. C

    Clustered storage status (ceph & sheepdog)

    Perfect. I will test it when I set up a new server in about one week. Currently I only have Proxmox with VPSs in production.
  18. C

    Clustered storage status (ceph & sheepdog)

    Thanks! Any idea when it will be released? And what storage options will it have? For example, attaching an rbd volume to a VM.
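
Regarding results 1 and 2: the config dump in result 1 still shows "lock: backup", which is why the later 'qm set 106 --lock backup' fails with exit code 25. A minimal sketch of clearing such a stale lock, assuming VM ID 106 as in the post and that no vzdump job is actually still running:

    # make sure no backup task for this VM is still active, then clear the stale lock
    qm unlock 106
    # verify: the config should no longer contain a "lock:" line
    qm config 106 | grep lock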
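
Regarding results 5 to 7: librbd client cache options are usually set in the [client] section of the ceph.conf read by the client node. The file path, option names and values below are illustrative assumptions (the standard librbd option closest to "rbd_cache_max_age" is "rbd cache max dirty age"), not taken from the posts:

    # /etc/ceph/ceph.conf on the Proxmox node (client side) -- illustrative values
    [client]
        rbd cache = true
        rbd cache size = 33554432          # 32 MiB writeback cache
        rbd cache max dirty = 25165824     # dirty bytes allowed before writeback
        rbd cache max dirty age = 1.0      # seconds dirty data may stay in cache

These options are read when the RBD image is opened, so for the question in result 5: no Proxmox host reboot should be needed, but the VM typically has to be stopped and started again for changed values to take effect.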
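
Regarding results 10 and 11: RBD storage is defined as a section in /etc/pve/storage.cfg, and the GUI reads that file directly, so no separate refresh step should be needed once the section is accepted. The "unsupported type 'rbd'" message in result 10 indicates the installed Proxmox storage code did not yet know the rbd type, which matches result 9, where a full-upgrade made it work. A minimal sketch of such a section, with placeholder monitor addresses and pool name:

    rbd: web01
        monhost 192.168.0.10 192.168.0.11 192.168.0.12
        pool rbd
        username admin
        content images

The matching client keyring is expected at /etc/pve/priv/ceph/web01.keyring, and because /etc/pve is shared cluster-wide, the same definition is visible on every node (which also covers the question in result 8).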
