Search results

  1. Best way to migrate large volume from Ceph to ZFS

    I guess the other way around is an option: add the Ceph storage of the old cluster as a remote Ceph on the new host, stop the VM on the old cluster and start it on the new host from the Ceph storage, then live move the disk to the ZFS storage (a rough CLI sketch follows after this list)
  2. Proxmox on dedibox (scaleway) : ip failover

    I no longer work at this company and sadly, it looks like my former employer decided to close access to all the repos.
  3. WARNING! Upgrade to 7 network problem, do not upgrade!!!

    Before ranting, please take a closer look at the license under which PVE is released (and look for the warranties). Keep in mind a lot of (most?) users were able to handle the upgrade with no issue (including myself, on several servers). Major upgrades of any system require basic administration...
  4. VM Backup fails with "ERROR: Backup of VM 100 failed - job failed with err -5 - Input/output error"

    Hard to tell. Could be a failing disk on either the source or the destination. You should check the system logs
  5. virtnbdbackup - backup utility for libvirt, using the latest changed block tracking features.

    You can already install PBS on PVE; it's not required to install it on a separate host
  6. May have ruined data of one of my nodes...

    You should delete the current RAID configuration (delete the VD again), then re-create the VD exactly as it was previously (same set of physical drives, in the same order, with the same chunk size, etc. Everything must be exactly the same). Be careful not to initialize the VD when recreating it...
  7. May have ruined data of one of my nodes...

    Whether you'll be able to expand the RAID6 array with more PDs depends on your RAID card. But at least you might be able to get your data accessible, back it up, wipe and recreate the array on all the disks, and restore
  8. May have ruined data of one of my nodes...

    You should recreate the RAID array with the exact same configuration as before (same PDs, same order, same chunk size, etc.)
  9. huge dirty-bitmap after sunday

    This is most likely because you have a weekly fstrim job running, which will mark all the trimmed sectors as dirty. It shouldn't affect backup speed that much as all those "dirty" sectors will not really be read from disk (see the timer check after this list)
  10. Fencing and "isolation response"

    In the doc, but also already tried ;-)
  11. Fencing and "isolation response"

    Proxmox will self-fence the isolated host: host1 will be forcibly rebooted, so VM100 is for sure not running anymore. While host1 is rebooting, VM100 will be relocated to either host2 or host3
  12. Very frustrated with Proxmox

    You can also check/replace the SATA cable. A bad one could also explain this kind of error. In any case, it's most likely a hardware problem.
  13. Very frustrated with Proxmox

    Your HDD or SSD /dev/sda is failing; you should replace it (see the SMART check after this list)
  14. VMs freezing and unreachable when backup server is slow

    That'd require a lot of spare space, CPU and memory resources, and overall it doesn't look like the proper way to handle the problem
  15. LXC Slow update (CentOS 7)

    Lastlog can't be rotated, and it's a binary file. If it's removed, it'll be recreated with the same size (and still sparse). IMHO the only reasonable thing to do is to exclude it from backups
  16. LXC Slow update (CentOS 7)

    You can do it by creating /var/log/.pxarexclude containing a single line: lastlog (haven't tried, just read the doc; see the example after this list)
  17. LXC Slow update (CentOS 7)

    So just exclude /var/log/lastlog from your backups and you should be fine
  18. LXC Slow update (CentOS 7)

    I mean, it could be sparse files present in the CentOS 7 CT, but not in the others
  19. LXC Slow update (CentOS 7)

    Couldn't large sparse files inside those CTs explain it? I had similar issues with AD domain members having huge sparse /var/log/lastlog files (see the sparse-file check after this list)
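
For result 1, here's a minimal CLI sketch of the approach described there: add the old cluster's Ceph pool as an external RBD storage on the new host, then live-move the disk to ZFS. The storage ID, monitor IPs, pool name, VMID, disk and target storage are placeholder assumptions; adapt them to your setup.

```bash
# On the new PVE host: add the old cluster's Ceph pool as an external RBD storage.
# Monitor IPs, pool and storage names below are placeholders.
pvesm add rbd old-ceph \
    --monhost "10.0.0.1 10.0.0.2 10.0.0.3" \
    --pool vm-pool \
    --username admin \
    --content images
# Copy the old cluster's keyring so PVE can authenticate:
#   /etc/pve/priv/ceph/old-ceph.keyring

# After stopping the VM on the old cluster and starting it on the new host from
# that storage, live-move its disk to the local ZFS storage (example VMID/disk):
qm move-disk 100 scsi0 local-zfs --delete 1
```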
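
For result 9, a quick way to confirm the weekly fstrim job inside the guest (standard systemd units on most modern distributions):

```bash
# Check whether the periodic TRIM timer is enabled and when it last/next runs
systemctl status fstrim.timer
systemctl list-timers fstrim.timer
```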
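
For results 12 and 13, a hedged example of confirming that /dev/sda is indeed failing before replacing it (requires the smartmontools package):

```bash
# Quick health verdict and full SMART attributes for the suspect drive
smartctl -H /dev/sda     # PASSED/FAILED summary
smartctl -a /dev/sda     # look at reallocated/pending sector counts
# The kernel log often shows the matching ATA/I-O errors
dmesg | grep -iE 'ata|sda'
```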
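
For results 15-17, the exclusion described in result 16 comes down to a one-line file inside the container (path and pattern are taken from the post; as noted there, untested):

```bash
# Inside the CT: tell the pxar archiver used for container backups to skip lastlog
echo 'lastlog' > /var/log/.pxarexclude
```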
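
For results 18 and 19, a simple way to spot a huge sparse /var/log/lastlog is to compare its apparent size with the space it actually occupies on disk:

```bash
# Apparent size vs. blocks really allocated (a sparse file shows a big gap)
ls -lh /var/log/lastlog
du -h /var/log/lastlog
du -h --apparent-size /var/log/lastlog
```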