I am new to Proxmox, so this might be a known issue.
But if I move a VM to a new host, the backup job sees it as a new VM and performs a full backup (rather than the scheduled incremental).
Is this by design? Can this behaviour be altered?
Hi, I've been using Proxmox on a test server and realised I want to commit a more substantial machine to this awesome VE, only to learn that differently named storage volumes cannot be migrated using the web GUI.
Okay then, no problem.
So I try using the console and this happens:
Good day to whoever deems this thread worthy of guru knowledge!
I am trying to bulk migrate my VMs from one old node to another. The issue I see is that the storage names are different: it is trying to migrate from local-lvm to local-lvm, but the new storage is named "Main-VMs". Is...
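For anyone hitting the same storage-name mismatch: the CLI accepts a target storage mapping on migration. A minimal sketch, assuming VMID 100 and a target node called "newnode" (both placeholders), with the storage names from this post:

```shell
# Migrate VM 100 to "newnode", remapping its disks onto the
# differently named target storage. A single storage ID maps
# ALL source storages to that one storage.
qm migrate 100 newnode --targetstorage Main-VMs

# A per-storage mapping (source:target) is also accepted:
qm migrate 100 newnode --targetstorage local-lvm:Main-VMs
```

Add `--online` for a live migration of a running VM; for bulk migration, the same `--targetstorage` value can be set in the bulk-migrate dialog's CLI equivalent per VM.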
I have a cluster of 6 nodes running version 7.4, all Dell PowerEdge R630, and I now need to add a seventh node, a Dell PowerEdge R640. Live migration of a VM from an R630 host to the R640 works fine, but when migrating from the new R640 host back to an R630, the VM...
I'm running a small PVE cluster of two nodes. Both have an encrypted ZFS dataset set up for container storage, using native ZFS encryption. This prevents migrating the containers from one node to another (https://bugzilla.proxmox.com/show_bug.cgi?id=2350).
However, if I create a directory...
We have the following scenario: PVE 7.4.x in a cluster.
vm1 on proxhost 2:
lvs shows 28% in use of the 127GB disk on local thin storage. fstrim -av works.
migrate vm1 to proxhost 01:
lvs shows 100% in use. This is a known issue, but running sudo fstrim -av in the VM normally results...
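A note for anyone with the same symptom: for fstrim in the guest to shrink thin-provisioned usage on the host, the virtual disk has to pass discards through. A sketch, assuming VMID 100 and a scsi0 disk on local-lvm (all placeholders for the real IDs):

```shell
# Enable discard pass-through on the disk (ssd=1 additionally reports
# it as an SSD, so some guests issue TRIM automatically). The setting
# takes effect after a full stop/start of the VM, not a reboot.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1

# Inside the guest, after the VM has been power-cycled:
fstrim -av

# Back on the host, check the thin-pool usage again:
lvs
```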
I currently have the following problem under Proxmox 7.1-12:
I want to migrate VMs from Proxmox to ESXi. As a test, I have so far converted the disk of a test Windows 10 VM with the command qemu-img convert -f raw vm-107-disk-0 -O vmdk migration-test-eins.vmdk, directly in the path...
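One detail worth checking with this workflow: a plain `-O vmdk` produces a monolithic VMDK that ESXi often cannot attach directly. A hedged sketch of the variant that usually imports cleanly (file names taken from the post; whether you need this depends on your ESXi version):

```shell
# Convert the raw Proxmox disk to a streamOptimized VMDK, the subformat
# that ESXi import tools (datastore browser, ovftool) expect; a plain
# monolithic VMDK typically needs an extra conversion on the ESXi side.
qemu-img convert -f raw vm-107-disk-0 -O vmdk \
    -o subformat=streamOptimized migration-test-eins.vmdk

# Alternative, run on the ESXi host after copying a plain VMDK over:
# vmkfstools -i migration-test-eins.vmdk -d thin migration-test-final.vmdk
```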
I want to migrate a VM created on Proxmox 6.3 (later upgraded to 7.4) to another server that is still on 6.3. The VM was originally created on local-lvm storage; the destination server uses a Ceph storage pool. Can I safely restore said VM on Ceph?
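Backup and restore is storage-agnostic: a vzdump archive carries no dependency on the source storage type, and the restore writes the disks onto whatever storage you name. A sketch, assuming VMID 107, a backup storage called "backups" and a Ceph storage ID "ceph-pool" (all placeholders):

```shell
# On the source server: back up the VM from local-lvm.
vzdump 107 --storage backups --mode snapshot

# Copy the archive to the destination server, then restore onto Ceph;
# --storage decides where the restored disks land.
qmrestore /var/lib/vz/dump/vzdump-qemu-107-*.vma.zst 107 --storage ceph-pool
```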
Hey, we built a new PVE Ceph cluster and would now like to migrate all VMs from our old cluster.
In our old one we have ZFS and directory based storage (raw and qcow2 disks) mixed.
How can we do this? We already tested "qm remote-migrate", but this seems unable to migrate ZFS to Ceph storage.
Is our only...
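For reference, the general shape of the cross-cluster migration we tested looks like this (experimental in PVE 7.3+; host, API token, fingerprint and storage/bridge IDs below are placeholders, and I can't say from here whether it covers every source storage type):

```shell
# Migrate VM 101 to a remote cluster, keeping the same VMID and
# placing the disks on the destination's Ceph storage "ceph-vms".
qm remote-migrate 101 101 \
    'host=192.0.2.50,apitoken=PVEAPIToken=root@pam!migrate=xxxxxxxx,fingerprint=AA:BB:...' \
    --target-bridge vmbr0 \
    --target-storage ceph-vms \
    --online
```

If remote-migrate refuses a particular disk format, the storage-agnostic fallback is vzdump on the old cluster and qmrestore with `--storage ceph-vms` on the new one.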
As the subject states, is there a way to move an existing CEPH cluster to Proxmox?
I have an existing cluster of 5 storage OSD nodes, 3 monitors, and a few virtualization hosts that I'd like to convert to proxmox.
While trying to migrate one of my VMs in our cluster I noticed some unusual output in the task viewer:
I've migrated a few VMs before but never saw anything like that.
I checked the most common PVE services, but they all seem to work fine. I would also like to...
Hi guys, I have the following problem:
I wanted to change the default SSH port from 22 to 2222, and I was able to do so by editing the sshd_config files.
However, even after changing the SSH port, when I try to migrate a VM between NODE_X and NODE_Y using Proxmox, it still uses port 22 and the...
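One workaround commonly reported for this (I can't promise it covers every code path): Proxmox's migration tasks go through plain ssh as root without an explicit `-p`, so root's SSH client config can redirect the port. The host pattern below is a placeholder for your actual node names/addresses, and this needs to be done on every node:

```shell
# Append a Port directive for the other cluster nodes to root's
# ssh client config; PVE's internal ssh invocations will pick it up.
cat >> /root/.ssh/config <<'EOF'
Host NODE_X NODE_Y
    Port 2222
EOF
chmod 600 /root/.ssh/config
```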
Soon I'm going to be moving all my VMs from one node to new hardware. The existing local storage is LVM-thin, all VM disks are qcow.
The new hardware is going to be all new SSDs in a ZFS array.
I have a dedicated PBS node that has current backups of all my VMs and the host.
After some research...
We have a running PBS with almost full storage (5 TB), so we purchased a new dedicated server with a bigger storage size.
We are having difficulty understanding how to move (migrate) the data from the old PBS server to the new one.
We would like to migrate the backup data for sure; best would be to...
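The usual route for this is to register the old server as a remote on the new PBS and pull the datastore contents over (deduplicated, resumable). A sketch with placeholder host, credentials and datastore names:

```shell
# On the NEW PBS: register the old server as a remote.
proxmox-backup-manager remote create old-pbs \
    --host 192.0.2.10 \
    --auth-id 'root@pam' \
    --password 'SECRET' \
    --fingerprint 'AA:BB:...'

# One-shot pull of the old datastore into the new one:
proxmox-backup-manager pull old-pbs old-datastore new-datastore

# Or set up a scheduled sync job instead of a one-shot pull:
proxmox-backup-manager sync-job create migrate-old \
    --remote old-pbs --remote-store old-datastore \
    --store new-datastore --schedule hourly
```

Once the sync is complete and verified, the PVE side only needs its PBS storage entry repointed at the new server.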
We have a Proxmox cluster comprising two identical large machines with 120 cores and 370+ GB of memory, each with two ZFS disk pools: Tank1, 50TB of HDD, and Tank2, 2TB of SSD. There is also a third, less powerful machine with just one ZFS pool, a mirrored pair of 10TB disks...
I want to migrate a VMware VM to Proxmox with qm importdisk, but I'm stuck because I have two disks in the VM.
With a single disk I do it as follows:
qm importdisk 999 /mnt/esx/vmfs/volumes/Datastore/WK/WK.vmdk local-zfs -format raw
But how do I...
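`qm importdisk` handles one disk file per call, so a second disk is simply a second invocation; each import attaches to the VM as a new "unused" disk that you then assign to a bus slot. A sketch, with the second disk's file name as a placeholder for the real one:

```shell
# First disk, as in the post:
qm importdisk 999 /mnt/esx/vmfs/volumes/Datastore/WK/WK.vmdk local-zfs -format raw

# Second disk: run importdisk again with the other VMDK
# (path below is a placeholder for your actual second-disk file):
qm importdisk 999 /mnt/esx/vmfs/volumes/Datastore/WK/WK_1.vmdk local-zfs -format raw

# Both imports appear as unused0/unused1; attach them to a bus:
qm set 999 --scsi0 local-zfs:vm-999-disk-0
qm set 999 --scsi1 local-zfs:vm-999-disk-1
```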
I've been looking into using hook scripts in my setup to announce the VM IP addresses via BGP, but unfortunately the documentation lacks important information, such as the behaviour during a migration.
I added my hook script to one VM, started a migration and noticed that the hook already...
I'm migrating my proxmox server to a new bigger server (more storage and compute power).
As a precaution I've set up a cluster and added a third 'temporary' server to help with migration if things go south!
The migration was performed from old -> temp -> new, all within the cluster.