I have a VM with one quite big data disk. I want to move that VM to another Proxmox host in my cluster. The target system has enough space, but unfortunately the source system no longer has enough space to create a snapshot. I can power down the VM and do any operation and it is not time...
I am having an unusual problem where, when I migrate a container from node 1 to node 2, it breaks. I migrate the container, and it shows up in node 2's VM list just fine, but when I try to start it, it doesn't boot. It fails with this error:
"TASK ERROR: unable to open file...
6.4.14 while trying to move qcow2 from NFS storage to local storage.
drive-scsi1: transferred 44.5 GiB of 300.0 GiB (14.82%) in 31m 21s
drive-scsi1: transferred 44.5 GiB of 300.0 GiB (14.83%) in 31m 22s
drive-scsi1: transferred 44.5 GiB of 300.0 GiB (14.84%) in 31m 24s
We run a Proxmox Backup Server in a VM on a Proxmox hypervisor.
After migrating the VM to another hypervisor, the following error appeared:
TASK ERROR: could not activate storage 'pbs1-datastore1': pbs1-datastore1: error fetching datastores - 500...
I was doing some server maintenance today and migrating some containers/VMs and twice I ran into issues with migration due to the following error:
TASK ERROR: can't lock file '/var/lock/pve-manager/pve-migrate-xxx' - got timeout
This happened with 2 different servers and 2 different...
We had evaluated Proxmox in development for a while. We have Proxmox 5.4 on a server which we would like to "delete". We have obtained new hardware and installed a clean Proxmox 7.1. We would like to migrate some VMs from the old hardware to the new one. Is there any easy/straightforward solution...
I'm trying to migrate VM storage to Linstor SDS and am running into some odd trouble. All nodes are running PVE 7.1:
pve-manager/7.1-5/6fe299a0 (running kernel: 5.13.19-1-pve)
Linstor storage is, for now, on one host. When I create a new VM on Linstor, it works. When I try to migrate a VM from another host...
We have automatic VM migration configured. The migration itself works, but afterwards the VM does not come up at all; it is stuck restarting the whole time. A manual migrate/stop/start works, but the automatic path does not. All info is in the attached file.
What’s the expected behavior here?
I have a 3-node cluster with dedicated physical corosync network, and a 2nd faster network for storage and networking. The corosync network is configured to failover to the fast network if interrupted.
High availability is configured on guests with shared...
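For reference, a two-link corosync setup of the kind described here can be sketched roughly like this in /etc/pve/corosync.conf. Node names, addresses, and priority values below are made up; with link_mode set to passive, knet uses the link with the highest knet_link_priority and fails over to the lower-priority link when it goes down:

```
totem {
  cluster_name: example-cluster
  config_version: 5
  link_mode: passive
  interface {
    linknumber: 0
    knet_link_priority: 10   # preferred: dedicated corosync network
  }
  interface {
    linknumber: 1
    knet_link_priority: 5    # fallback: faster storage/VM network
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1     # dedicated corosync network (link 0)
    ring1_addr: 10.10.0.1    # fast network (link 1)
  }
  # ...one node entry per cluster member, with matching ring0/ring1 addresses
}
```

Remember to bump config_version when editing the file so the change propagates across the cluster.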
I run a standalone Proxmox server which I have been upgrading in place for years. I am currently on 6.x, not having upgraded to 7 yet.
It was a clean install on Proxmox 4.2 I believe.
When I initially installed it, I set it up to boot from a ZFS mirror of two SATA SSDs using...
Since I needed to document the process anyway I thought I share my experience of migrating a domain controller from a hyper-v cluster to a proxmox cluster with ceph storage. I wrote this with these 2 wiki articles as background information...
I did a fresh proxmox install on a new ssd.
During installation it recognized that there was a VG "pve" on an old hard disk and asked me to rename it to "pve--OLD...", so I did.
The new Proxmox Server comes up, and I see that it somehow recognizes both hard disks.
But it doesn't show any of my...
So you need to move to the latest version and you get the chills because it is such a huge jump in versions.
Worry no more, it is quite simple!
Just make a regular backup of your VMs.
Using SSH, for instance with FileZilla, copy the backup files (you only need the latest backup of each) to your...
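The backup/copy/restore path above can be sketched as a few commands. The VMID, hostname, storage name, and paths here are hypothetical, and the `run` helper only prints each command (a dry run), so the sketch is safe to execute as-is; remove `run` to perform the real migration:

```shell
#!/usr/bin/env bash
# Sketch: offline migration of a VM between two Proxmox hosts via vzdump backup,
# copy, and qmrestore. All IDs, hostnames, and paths are made-up examples.
set -euo pipefail

VMID=100                         # VM to move (hypothetical)
TARGET=root@new-pve-host         # new Proxmox host (hypothetical)
DUMPDIR=/var/lib/vz/dump         # default vzdump directory

run() { echo "+ $*"; }           # dry-run helper: print the command instead of executing it

# 1. On the old host: create a compressed backup with the VM stopped.
run vzdump "$VMID" --mode stop --compress zstd --dumpdir "$DUMPDIR"

# 2. Copy the newest dump for this VM to the new host (FileZilla over SFTP works too).
run scp "$DUMPDIR/vzdump-qemu-$VMID-"'*.vma.zst' "$TARGET:$DUMPDIR/"

# 3. On the new host: restore the dump into a VMID and target storage of your choice.
run ssh "$TARGET" qmrestore "$DUMPDIR/vzdump-qemu-$VMID-<timestamp>.vma.zst" "$VMID" --storage local-lvm
```

The same pattern works for containers with `vzdump` plus `pct restore` instead of `qmrestore`.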
I'm running a PVE 5.2 cluster with 4 nodes. The cluster is attached to a SAN, an HP P2000 G3 iSCSI. VMs are hosted on the SAN.
The first controller of the SAN failed. Everything is running on the second controller, but I can't manage PVE anymore.
Although VMs are running, it seems that...
Just sharing a possible issue some people may run into in the future: a broken pipe when migrating a VM, during a replicate operation, etc.
I recently upgraded the CPU and motherboard for my alternate node. There were some issues during the upgrade and considering I had...