Hi.
This is a normal backup made from the same server:
INFO: starting new backup job: vzdump 100 --remove 0 --mode snapshot --compress lzo --storage FR1 --node srv6
INFO: Starting Backup of VM 100 (openvz)
INFO: CTID 100 exist unmounted down
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/FR1/dump/vzdump-openvz-100-2017_07_23-23_55_32.tar.lzo'
INFO: Total bytes written: 1694976000 (1.6GiB, 8.7MiB/s)
INFO: archive file size: 654MB
INFO: Finished Backup of VM 100 (00:05:46)
INFO: Backup job finished successfully
TASK OK
Everything OK!
Now, the same backup for a bigger OpenVZ container:
INFO: starting new backup job: vzdump 122 --remove 0 --mode snapshot --compress lzo --storage FR1 --node srv6
INFO: Starting Backup of VM 122 (openvz)
INFO: CTID 122 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /var/lib/vz/private/122/ to /mnt/pve/FR1/dump/vzdump-openvz-122-2017_07_24-00_13_01.tmp
As far as I know, suspend mode means downtime, and for this container it is always around 30 minutes.
How can I prevent this? It's the exact same server, yet for the larger container it cannot do a snapshot backup.
Thanks
EDIT:
I noticed that the first container was actually stopped (status = stopped), so its backup used 'stop' mode rather than a snapshot.
However, I still get downtime when backing up the larger, running container. Is there any other way?
ADDING:
root@srv6:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg/home
  LV Name                home
  VG Name                vg
  LV UUID                ugJvL6-Dfq5-Deql-hIbV-tCew-TMx0-Z7NC61
  LV Write Access        read/write
  LV Creation host, time rescue.ovh.net, 2017-06-27 22:09:30 +0200
  LV Status              available
  # open                 1
  LV Size                97.65 GiB
  Current LE             24999
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
root@srv6:~# vgdisplay
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               104.20 GiB
  PE Size               4.00 MiB
  Total PE              26674
  Alloc PE / Size       24999 / 97.65 GiB
  Free PE / Size        1675 / 6.54 GiB
  VG UUID               iqwsmJ-SJTT-Sw1t-btMr-jSQv-1PJJ-heyWrF
df -h inside the OpenVZ container that is being backed up:
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/simfs      300G   24G  277G   8% /
devtmpfs        7.9G     0  7.9G   0% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           7.9G  392K  7.9G   1% /run
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0
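For context on the "unable to detect lvm volume group" message: vzdump can only use snapshot mode when the container's private area (/var/lib/vz/private/<ctid>, as shown in the sync line of the failing log) sits on an LVM logical volume, so that an LVM snapshot can be taken from free space in the volume group. A rough sketch of that decision, run on the host (the can_snapshot helper and the device-name patterns are my own illustration, not vzdump's actual code):

```shell
#!/bin/sh
# Hypothetical sketch: decide whether vzdump could use snapshot mode for a path.
# Assumption: snapshot mode needs the path's backing device to be an LVM
# device-mapper node (e.g. /dev/mapper/<vg>-<lv> or /dev/<vg>/<lv>).
can_snapshot() {
    dev=$1   # backing device, as reported by: df -P <path> | awk 'NR==2 {print $1}'
    case "$dev" in
        /dev/mapper/*|/dev/vg/*) echo snapshot ;;   # on LVM: snapshot possible
        *)                       echo suspend  ;;   # not on LVM: fall back
    esac
}

can_snapshot /dev/vg/home   # → snapshot
can_snapshot /dev/md2       # → suspend
```

So if `df -P /var/lib/vz` on srv6 reports a plain partition or md device rather than an LVM node, that would explain the fallback to suspend mode even though the host does have a volume group.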