I didn't restart any of the services, and they didn't crash either.
● proxmox-backup-proxy.service - Proxmox Backup API Proxy Server
Loaded: loaded (/lib/systemd/system/proxmox-backup-proxy.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2022-05-08 01:40:14 CEST...
When I disabled GC, I managed to successfully create host backups with the PBS client on CentOS that are usable. I didn't test it with a VM backup, but the scenario was the same for VM and host backups with GC enabled. In both cases verify failed and I wasn't able to access files inside backups with GC...
In my case it looks like this: if a task takes longer than 24h and GC runs before the backup is finished, GC removes chunks belonging to the running backup that are older than 24h, which ultimately results in a failed backup when I try to verify or use it.
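For anyone hitting the same thing, disabling the scheduled GC can be done roughly like this (a minimal sketch; the datastore name 'store1' is just an example and I'm assuming the usual proxmox-backup-manager CLI):

# show the current datastore settings, including gc-schedule
proxmox-backup-manager datastore show store1

# clear the GC schedule so GC can't start while the long backup runs
proxmox-backup-manager datastore update store1 --delete gc-schedule

# once backup and verify have finished, set the schedule back, e.g. Saturday night
proxmox-backup-manager datastore update store1 --gc-schedule 'sat 02:00'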
Backup server is running on:
4 x Intel(R) Core(TM) i3-6100 CPU @ 3.70GHz (1 Socket) with 16 GB RAM
Datastore is on Western Digital Ultrastar DC HC520 12TB disks.
Not really a speed monster, but smaller backups work fine.
It's longer than 24h for the file backup, but before this I was making VM backups that took 17h, and I had a problem then too, though with an invalid dirty bitmap.
I'm also making backups of a smaller VM (~60GB) and there are no such problems there.
Hi,
it's a fresh install of Backup Server 2.1-1 in a VM under PVE 7.1-1. The datastore is ext4 inside the VM, which sits on a ZFS mirror on the PVE host. And yes, I'm running a scheduled GC while there is a backup running. Is it possible that GC is removing those chunks? The backup is very long (almost 48h for a file backup) due to...
Hi,
I have a strange issue with backing up large VMs and even doing file backups with the PBS client from machines with slow NL-SAS disks.
Two machines with about 2 TB of data, where backups take about 17 to 24 hours, have "missing chunks" or "bad chunks" errors in the verify task. I have another...
Hi,
I have a PVE 6 cluster with two nodes. Both have the same network configuration, and one of the nodes has a VLAN issue. It looks like the bond0 interface is not aware of the tagged VLANs configured on vmbr0, which is attached to this bond.
Here is network configuration from a working node:
auto lo
iface lo...
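For comparison, this is a minimal sketch of what I'd expect a VLAN-aware bridge over a bond to look like in /etc/network/interfaces (interface names, bond members and the address here are placeholders, not my actual values):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 172.19.0.5
        netmask 255.255.255.0
        gateway 172.19.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094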
Hi,
Sure, I agree that LVM storage is best for online migration, but I need a 5 TB volume mounted into a specific VM. Online migration with a data copy is impossible in that case, not only because of the time needed for such a task, but most importantly because in that case I would need to attach not only...
Thanks for the answer.
Unfortunately, it only works offline due to an error:
2019-08-23 13:51:32 starting migration of VM 402 to node 'proxmox5' (172.19.0.6)
2019-08-23 13:51:32 found local disk 'huawei-lvm:vm-402-disk-0' (in current VM config)
2019-08-23 13:51:32 copying disk images...
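For reference, the kind of invocation I mean is something like this (VMID and target node taken from the log above, the options are my assumption of what online migration with local disks would need):

qm migrate 402 proxmox5 --online --with-local-disks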
Hi,
I have a question about migrating a VM with a directly attached LUN.
This is my config:
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 16384
name: sowa
net0: virtio=FE:5F:80:5D:76:C1,bridge=vmbr0,firewall=1,tag=380
numa: 0
onboot: 1
ostype: l26
scsi0: huawei-lvm:vm-402-disk-0,size=32G...
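What I'm wondering is whether marking the SAN-backed LVM storage as shared would let migration skip the disk copy entirely. A rough sketch of what I mean in /etc/pve/storage.cfg (the VG name is a placeholder, not my real one):

lvm: huawei-lvm
        vgname huawei_vg
        content images
        shared 1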
Is there any feature that allows changing the hard disk names of virtual machines? Unfortunately, my colleague created disks on separate storages for a few VMs, and I can't move them to a single storage now because they have the same names.
agent: 1
boot: c
bootdisk: ide0
cores: 16
cpu: host
cpuunits: 8192...
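For context, what I had in mind as a workaround (assuming the disks sit on LVM-backed storage; the VG name, volume names, storage name and VMID below are placeholders, and the VM should be powered off first) is roughly:

# rename the logical volume on the underlying volume group
lvrename my_vg vm-100-disk-0 vm-100-disk-1

# point the VM config at the renamed volume
qm set 100 --ide0 my-lvm-storage:vm-100-disk-1

# let PVE rescan for any leftover or unreferenced volumes
qm rescan --vmid 100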