Search results

  1. [SOLVED] Backup client seems to "hang" when backing up bigger files

    I know it's possible to back up more than 30 or 40 MB with PBS; our 3.5 TB datastore is nearly full. Like I said, my test is simple: 1 file of 20 MB -> OK; 2 files of 20 MB each -> NOK; 1 file of 40 MB -> NOK. So in 1.0.9 with one core, I can assure you it cannot back up 40 MB. I tested with two cores in 1.0.9... [a reproduction sketch follows the results]
  2. [SOLVED] Backup client seems to "hang" when backing up bigger files

    I just forgot to mention that it seems to happen when the total size is larger than 30 or 40 MB. If I create a 20 MB file, it can back it up. If I copy that file, it backs up the first one and hangs on the second one. When I launch tcpdump, there is traffic while it's backing up, but...
  3. [SOLVED] Backup client seems to "hang" when backing up bigger files

    Hi, I was having the exact same problem. It was not happening on all servers. Version not working: 1.0.9; version working: 1.0.6. I uninstalled 1.0.9-1, which was not working, and installed 1.0.6-1. Everything is working now. I also tested 1.0.8-1 with success. I think there is something wrong in...
  4. [SOLVED] ceph bad performances only from VM and CT

    First, I didn't reinstall everything; I only rebuilt the Ceph storage. Next, all my clients were on an infrastructure where the storage was so slow (unusable) that I had to move them onto non-shared storage, where HA was no longer available. Unfortunately, my infrastructure is rather busy on the...
  5. [SOLVED] ceph bad performances only from VM and CT

    Hi, all my problems are now solved. I ran the following command on my VM last week: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=2G --readwrite=randrw --rwmixread=75 Normally, I get these IOPS (or even... [the fio flags are annotated after the results]
  6. [SOLVED] ceph bad performances only from VM and CT

    Hello, we are currently using Proxmox on a 4-node cluster with Ceph storage, with 2 OSDs of 420 GiB on each node except the last one (2x900 GiB). Our Ceph cluster communicates on a dedicated VLAN over a 1 Gb/s vRack (OVH, a French hosting provider). Since the 10th of December, we have been having...
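
Result 1's test (one 20 MB file OK, 40 MB total NOK) can be reproduced with a couple of throwaway files. The sketch below is one way to do that, not the exact commands from the thread: the repository string root@pam@pbs.example.com:store1 is a placeholder, and 8007 is the default Proxmox Backup Server API port, used here for the tcpdump check mentioned in result 2.

    # Create two 20 MB test files (the size range where the hang was reported).
    mkdir -p /tmp/pbs-test
    dd if=/dev/urandom of=/tmp/pbs-test/file1.bin bs=1M count=20
    dd if=/dev/urandom of=/tmp/pbs-test/file2.bin bs=1M count=20

    # Back up the directory as a pxar archive; the repository string is a placeholder.
    proxmox-backup-client backup test.pxar:/tmp/pbs-test \
        --repository root@pam@pbs.example.com:store1

    # In a second terminal, check whether traffic still flows while the client appears to hang.
    tcpdump -ni any tcp port 8007

Per results 1 and 2, a single 20 MB file completes while a 40 MB total stalls on client 1.0.9, and downgrading to 1.0.6-1 or 1.0.8-1 (result 3) avoids the problem.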
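
The one-line fio command in result 5 is easier to read with its flags annotated. The notes below describe general fio behaviour and are not specific to that poster's setup.

    # --randrepeat=1      seed the random generator so runs are repeatable
    # --ioengine=libaio   Linux native asynchronous I/O
    # --direct=1          non-buffered I/O (O_DIRECT), bypassing the page cache
    # --gtod_reduce=1     fewer clock calls; detailed latency stats are disabled
    # --name=test         job name
    # --filename=...      fio creates this test file in the current directory
    # --bs=4k             4 KiB block size
    # --iodepth=64        keep up to 64 I/Os in flight
    # --size=2G           2 GiB working set
    # --readwrite=randrw  mixed random reads and writes
    # --rwmixread=75      75% reads / 25% writes
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=random_read_write.fio \
        --bs=4k --iodepth=64 --size=2G \
        --readwrite=randrw --rwmixread=75

The read and write IOPS figures in fio's end-of-run summary are the numbers the poster is comparing.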