If you are using the exact same RAID card in the new server, the install will probably boot normally. If you're using a different RAID card, this may be a problem, or even impossible.
Also: the .dat file stops growing on all of the failed CTs:
root@node004:~# du -s /var/lib/vz/dump/vzdump-lxc-138-2018_10_25-16_23_13.tar.dat
73201 /var/lib/vz/dump/vzdump-lxc-138-2018_10_25-16_23_13.tar.dat
It's always around (maybe exactly) the same size when it stops.
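A simple way to verify that the file has really stalled is to sample its size twice a few seconds apart. A sketch (GNU stat assumed; the path and interval below are just examples):

```shell
# is_growing FILE [SECONDS] - succeed (exit 0) only if FILE grew during the interval
is_growing() {
  file=$1; interval=${2:-5}
  size1=$(stat -c %s "$file")   # size before
  sleep "$interval"
  size2=$(stat -c %s "$file")   # size after
  [ "$size2" -gt "$size1" ]
}

# example (hypothetical dump path):
# is_growing /var/lib/vz/dump/vzdump-lxc-138-2018_10_25-16_23_13.tar.dat 10 \
#   && echo "still growing" || echo "stalled"
```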
Some new info: I located some additional CTs that have this problem. They are all CentOS, but were installed at different times and run all kinds of software. Some are big, some are small. I have tried a stop-mode backup on two so far, and that worked on both CTs.
I ran pct fsck on them, but found no errors. Any...
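For what it's worth, the fsck pass could be scripted across several CT IDs like this (a sketch; the IDs are examples, and the `command -v` guard is only there so the script fails cleanly when run off a Proxmox host):

```shell
# fsck_cts ID... - run "pct fsck" on each given container ID
fsck_cts() {
  if ! command -v pct >/dev/null 2>&1; then
    echo "pct not found (not a Proxmox host?)" >&2
    return 1
  fi
  for ctid in "$@"; do
    echo "== CT $ctid =="
    pct fsck "$ctid"
  done
}

# example (hypothetical IDs):
# fsck_cts 138 139 140
```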
Thanks :) This is running in production, so I can't do a stop/suspend backup. I'm trying to clone it right now to try some other things.
syslog:
Oct 25 09:44:01 node007 pvedaemon[3776888]: command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
Oct 25 09:44:01 node007 pvedaemon[3776888]: ERROR...
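When `umount` fails with exit code 32 like that, something is probably still holding the snapshot mount open. A small check I could run before retrying (a sketch; `/mnt/vzsnap0` is taken from the log above, and `fuser` comes from the psmisc package):

```shell
# check_snap_mount [DIR] - report whether DIR is still mounted and what holds it open
check_snap_mount() {
  dir=${1:-/mnt/vzsnap0}
  # the command -v guard keeps this safe on systems without util-linux mountpoint
  if command -v mountpoint >/dev/null 2>&1 && mountpoint -q "$dir"; then
    echo "$dir is still mounted; processes using it:"
    fuser -vm "$dir" 2>&1 || true
  else
    echo "$dir is not mounted"
  fi
}
```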
Hi, I'm trying to run vzdump snapshot backups of my CentOS 7 LXC containers to a local ZFS RAID 1 storage. The LXC containers run on Ceph BlueStore.
BTW: I've also tried this with a shared NFS storage and a local LVM storage.
The backup of some containers hangs until I manually stop it. I think...
lnxbil: Thanks for your suggestion. The backup server was not yet in monitoring, so I'm lacking those metrics. I've decided to do a clean installation and try once more, just to be sure there wasn't a simple mistake in my configuration. I'll let you know if there is any difference.
janos: thanks...
Thanks for your reply. This is an E5-2609 v3 with 16 GB ECC RAM and eight 7200 RPM SAS disks on a RAID card with 1 GB of memory (write-back with BBU), connected over bonded dual 10 GbE. The hardware was previously used as a primary iSCSI storage for VMware (40+ VMs), so there's hardly any load, and I/O responses are...
I have a 5-node Proxmox cluster with Ceph storage running a mixture of LXC containers and QEMU VMs. I have an extra server I want to configure as backup storage and to mount extra partitions on the containers (to have some extra storage for tar backups).
This server is connected to the...
I have a hyperconverged Proxmox/Ceph cluster of 5 nodes running the latest Proxmox/Ceph (BlueStore) with six 480 GB SSDs each (totalling 30 OSDs), and I'm starting to run low on storage. Is it possible, and would it be wise, to replace some (if not all) SSDs with bigger ones? Or if I added a node...
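For rough numbers, with 3x replication (assuming the default size=3 pools, and ignoring near-full ratios and uneven OSD utilisation) the raw-to-usable math looks like this:

```shell
# back-of-the-envelope Ceph capacity: 30 OSDs of 480 GB with 3 replicas
osds=30; size_gb=480; replicas=3
raw=$((osds * size_gb))        # total raw capacity in GB
usable=$((raw / replicas))     # usable capacity before full-ratio headroom
echo "raw: ${raw} GB, usable: ${usable} GB"
```

So roughly 14.4 TB raw and 4.8 TB usable before any full-ratio headroom is subtracted; swapping in bigger SSDs scales both numbers linearly.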