I would really appreciate a thread with a continuous, complete status report about LXC online migration; I mean:
what has been done so far to make it work;
why it doesn't work now;
who needs to do what to make it work;
is there any timeframe for that;
is there any way to speed...
Hi,
When we live-migrate a VM from one of our clustered servers to another, it usually works without problems.
There is one VM in particular that does not migrate. It gives the following error message:
2018-03-29 06:34:42 starting migration of VM 113 to node 'proxbarw04'...
I have tried the following:
root@node01-sxb-pve01:/vz/template/ova# qm importovf 109 CentOS7.2_64_kusanagi7.8.3.ovf local
Formatting '/var/lib/vz/images/109/vm-109-disk-1.raw', fmt=raw size=32212254720
(100.00/100%)
root@node01-sxb-pve01:/vz/template/ova# less CentOS7.2_64_kusanagi7.8.3.mf...
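For reference, the .mf manifest just lists checksums for the OVF package files; a minimal sketch for verifying them by hand, assuming SHA1 entries (the disk file name below is hypothetical):
# print the manifest entries (format: SHA1(file)= <hash>)
cat CentOS7.2_64_kusanagi7.8.3.mf
# compute the checksum of each packaged file and compare against the manifest
sha1sum CentOS7.2_64_kusanagi7.8.3.ovf
sha1sum CentOS7.2_64_kusanagi7.8.3-disk1.vmdk   # hypothetical disk file name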
Hi,
after reading the cluster manager and container documentation, I'm wondering whether container migration between cluster nodes is possible even if there is no shared storage available. I'm not looking for HA. Does anybody know?
Thanks in advance.
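As a hedged sketch: offline migration of a container with local volumes copies the volumes to the target node over SSH, so shared storage is not required; the CT ID and node name below are placeholders:
pct shutdown 100    # stop the container first; live CT migration is not supported
pct migrate 100 node02    # copies local volumes to the target node, then moves the config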
2018-03-01 20:06:58 use dedicated network address for sending migration traffic (172.18.2.1)
2018-03-01 20:06:59 starting migration of VM 7113 to node 'node02-sxb-pve01' (172.18.2.1)
2018-03-01 20:06:59 found local disk 'local:7113/vm-7113-disk-1.raw' (in current VM config)
2018-03-01 20:06:59...
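When a local disk is found in the VM config, as in this log, an online migration needs the local-disks option; a minimal sketch, with the VM ID and node taken from the log above:
qm migrate 7113 node02-sxb-pve01 --online --with-local-disks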
Hi,
the storage live migration creates a thick-provisioned target even if a format like qcow2 is selected.
I only found the announcement for PVE 3.0, where someone suggested using qemu-img convert afterwards to create a sparse image file. But how would this be done on a running VM?
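As a rough sketch of the two usual options (paths and IDs are hypothetical): qemu-img convert produces a sparse copy but requires the VM to be stopped, while on a running VM the common alternative is enabling discard on the disk and trimming from inside the guest:
# offline: produce a sparse qcow2 copy of the thick image (VM must be stopped)
qemu-img convert -O qcow2 /var/lib/vz/images/100/vm-100-disk-1.raw /var/lib/vz/images/100/vm-100-disk-1.qcow2
# online alternative: with discard=on set on the disk, reclaim free space from inside the guest
fstrim -av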
Hi,
I need to migrate a two-node cluster from PVE 3.2 to 5.1.
The cluster uses a shared storage device connected via SAS to the nodes running GFS2 on top of the shared block devices.
GFS2 uses lock_dlm as locking manager, managed by the cluster manager of PVE.
Is it possible to install PVE 5.1...
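Since an in-place upgrade across that many major versions is not supported, one commonly suggested path is backup and restore; a hedged sketch with placeholder VM IDs, storage names, and dump file name:
# on the old 3.2 node: full backup of each VM to an export/backup storage
vzdump 101 --storage backupstore --mode stop
# on the freshly installed 5.1 node: restore from the dump file
qmrestore /mnt/backupstore/dump/vzdump-qemu-101-2018_01_01-00_00_00.vma.lzo 101 --storage local-lvm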
We have a VM template that was created by choosing "convert to template".
However, migrating this template fails with the following error:
2018-01-06 13:08:54 found local disk 'local-lvm:base-111-disk-1' (in current VM config)
2018-01-06 13:08:54 copying disk images
illegal name...
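One hedged workaround, with placeholder IDs: make a full clone of the template (which produces a normal vm- volume instead of the base- volume that trips the name check), migrate the clone, and re-mark it as a template on the target:
qm clone 111 911 --full --name template-copy   # full clone avoids the base- volume name
qm migrate 911 othernode
qm template 911    # convert the clone back into a template on the target node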
Hello,
I have a Dell C6220 server that hosts 3 nodes with the following configuration:
- dual Xeon E5-2630L
- 32 GB RAM
- 1x 500GB (Proxmox 5.1-35) and 3x 1TB (Ceph)
- 1G private link
This is just an installation to test how Ceph and HA work, so I created a cluster with the 3 nodes, then I used...
Hi all,
I'm wondering if it is possible to move/transfer a VM disk from a local HDD to a SAN LVM-over-iSCSI storage without reinstalling the VM from scratch.
I'm aware that I will first need to convert my disk image from qcow2 to raw before attempting to import it into the SAN storage.
Is...
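For what it's worth, qm move_disk handles the format conversion itself (LVM stores raw only), so a manual qemu-img step may not be needed; a minimal sketch with placeholder VM ID, disk slot, and storage name:
qm move_disk 100 scsi0 san-lvm --delete 1   # copies/converts the image onto the LVM storage and drops the source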
Hello,
I have an environment with FreeNAS 11 and XenServer 7.
The FreeNAS iSCSI and NFS services are recognized successfully by the XenServer hypervisor.
I want to migrate from XenServer to Proxmox, but my question is about the iSCSI disk.
What is the process for this migration?
Is it possible this...
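A hedged outline, assuming you can export each XenServer disk as a VHD (file names and the volume group are placeholders): convert it with qemu-img, then write it into a Proxmox-created volume:
# convert the exported VHD to a raw image
qemu-img convert -f vpc -O raw exported-disk.vhd vm-100-disk-1.raw
# for an iSCSI/LVM target, write the raw image into a pre-created logical volume
dd if=vm-100-disk-1.raw of=/dev/vg-san/vm-100-disk-1 bs=1M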
Hi,
I plan to replace a stack of 2 switches. A 3-node cluster (with dedicated LACP interfaces for vmbr0 and Ceph) is connected to this stack. I guess I have to shut down the entire cluster to do this. What are your recommendations for doing this properly? (Stop all VMs, then what, just...
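A rough ordering that is often suggested for a full-cluster shutdown with Ceph (the individual commands are standard, the sequence itself is a sketch):
# 1. shut down all VMs/CTs
# 2. keep Ceph from rebalancing while the OSDs disappear
ceph osd set noout
# 3. power off the nodes, swap the switch stack, power the nodes back on
# 4. once the cluster reports healthy again
ceph osd unset noout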
I have a testing cluster with 3 nodes on PVE 5.0; I've managed to set up ZFS replication and HA.
1.) HA failover was not working unless I created an HA group and put my CT in it (originally I was thinking that any running node would be used when no group is assigned to the resource)
2.) When I manually...
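On point 1, a minimal sketch of the group setup that made failover work, with placeholder names:
ha-manager groupadd mygroup --nodes "node1,node2,node3"
ha-manager add ct:100 --group mygroup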
Hello everyone,
I own an HP MicroServer Gen8 with 4x 3TB HDDs as a Z1 ZFS RAID, running under Ubuntu 16 on a USB stick.
pool: zfspool
state: ONLINE
scan: scrub repaired 0 in 12h11m with 0 errors on Sun Sep 10 12:35:05 2017
config:
NAME...
Hello,
we've replaced one of the nodes in our cluster and now we can't do offline migration. Migration finishes in 10 seconds, the GUI reports the task as successful ("TASK OK"), and the VM in question is moved to a different node, but it can't be started because the disk image is missing (it...
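A common cause of this symptom is a local storage wrongly marked as shared in /etc/pve/storage.cfg, which makes migration skip the disk copy entirely; a hedged example of what to check:
# /etc/pve/storage.cfg -- if this directory storage is not actually shared,
# 'shared' must be 0/absent; with 'shared 1' migration assumes the image
# already exists on the target node and only moves the config
dir: local
        path /var/lib/vz
        content images,rootdir
        shared 0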
So, I have a node that dropped off the network last week. Turns out the switch port went bad.
I got it plugged into a different 10-gig port and brought it back up. But corosync kept restarting, and the node kept joining and leaving the quorum. I am guessing that perhaps the NIC is also...
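Some first checks for a flapping corosync membership (all standard commands):
pvecm status    # quorum and membership as PVE sees it
corosync-cfgtool -s    # ring/link status per interface
journalctl -u corosync -f    # watch live for retransmit / link-down messages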
Hi all,
I've copied the backup file from an old Proxmox server over to a new one.
Then I restored it via the GUI, but instead of landing on the LVM, where I have plenty of space, it ended up on the local disk.
How can I transfer the existing VM onto the LVM? All I need is to point/move the disk from...
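Assuming the restored VM now has its image on local storage, a minimal sketch of relocating it (VM ID, disk slot, and storage name are placeholders):
qm move_disk 100 virtio0 lvm-storage --delete 1   # copies the image to the LVM storage and removes the local copy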
This is a simple question, but I haven't been able to figure out a solid solution.
I run a simple Proxmox install with a ZFS pool on a single disk drive.
Now I want to move all of that data onto a new SSD drive.
How can I do this?
Here is the output of zpool list:
rpool/ROOT on /rpool/ROOT type...
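One hedged approach for a single-disk pool: attach the new SSD as a mirror, let it resilver, then detach the old disk (device names below are placeholders, and a boot pool like rpool also needs its bootloader reinstalled on the new disk):
zpool attach rpool /dev/sda2 /dev/sdb2   # old device, then new device: pool becomes a mirror
zpool status rpool    # wait until the resilver completes
zpool detach rpool /dev/sda2    # drop the old drive from the mirror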
This doesn't seem to be the previously mentioned SSH problem (which is at 1:7.4p1-10 anyway):
Jul 11 18:34:38 copying disk images
Jul 11 18:34:38 starting VM 103 on remote node 'bowie'
Jul 11 18:34:40 start remote tunnel
Jul 11 18:34:40 starting online/live migration on...
Hi all,
Currently my setup is:
m.2 SSD with Proxmox install
SATA SSD with LVM VMs
I want to reinstall Proxmox on ZFS and am wondering how to move my VMs over.
Is it as simple as backing up to external media using the built-in backup function and then restoring?
Is there a better way to go...
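The backup/restore route does work; a minimal sketch assuming a USB disk mounted at /mnt/usb and placeholder IDs and storage names:
vzdump 100 --dumpdir /mnt/usb --mode stop    # before the reinstall
qmrestore /mnt/usb/vzdump-qemu-100-*.vma.lzo 100 --storage local-zfs    # after it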