Backing up and cloning a VM with ID XX gets ID YY cloned or backed up, and more

hanscees (New Member), Mar 21, 2024
Hi.
I have been using ESXi for ages and am now migrating to Proxmox. I have one Proxmox VE 8.1.4 installation on bare metal. No clusters or anything like that.

On the Proxmox VE host I set up two base VM systems with a Docker install: one Alpine-based and one Debian-based. When I make a backup, the Alpine one is about 300-600 MB and the Debian one is about 1.7 GB.

From these two systems I clone new systems for all kinds of functionality. All went fine for a week or so; I now have about 11 systems. I do everything through the GUI.
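(For reference, I believe cloning through the GUI corresponds to roughly this on the CLI; the IDs and name are just from my setup:)

```
# Full clone of the Alpine base VM (105) to a new VMID (111)
qm clone 105 111 --name alp24time2 --full
```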

However, yesterday I set up a new Alpine-based time server and noticed the backup was 1.7 GB. This made me suspicious.
Researching further, I thought: let's redo that step by step and see how big the backup becomes.
It turns out that when I cloned my base Alpine (ID 105) to a new server (ID 111) and connected to that ID 111, the shell in the GUI connected to ID 110, not 111.
When I connect with the GUI shell to ID 110, I get the shell of ID 102.
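(One way to tell which guest a console is really attached to is simply to look inside it and compare with what Proxmox shows for that VM, e.g.:)

```
# Inside the shell the GUI opened: who am I really?
hostname
ip -4 addr show    # compare with the address this VM is supposed to have
```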

So the GUI is making a mess of it. And the backups I make are apparently not always of the systems I think I am backing up.

Who can help me fix and understand this mess?
The only errors I have seen were when I wanted to stop VMs through the GUI and they didn't stop. I then logged in with SSH and killed the processes showing that ID in `ps waux | egrep XX`.

Perhaps that messed things up?
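(In hindsight, a less drastic way to force a stuck VM down than killing the kvm process by hand would probably have been the qm CLI, something like:)

```
# Ask VM 110 for a clean shutdown first (ACPI / guest agent), wait up to 60s
qm shutdown 110 --timeout 60
# If it still hangs, hard-stop it through Proxmox instead of kill(1)
qm stop 110
```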

Also, the log from the backup that is 1.7 GB but should be about 700 MB has this curious line: "pending configuration changes found (not included into backup)"

```

vzdump 110 --mode snapshot --node pve --storage backup1tb --remove 0 --notification-mode auto --compress zstd --notes-template '{{guestname}}'


110: 2024-03-20 22:21:30 INFO: Starting Backup of VM 110 (qemu)
110: 2024-03-20 22:21:30 INFO: status = running
110: 2024-03-20 22:21:30 INFO: VM Name: alp24-time
110: 2024-03-20 22:21:30 INFO: include disk 'scsi0' 'local-lvm:vm-110-disk-0' 14G
110: 2024-03-20 22:21:31 INFO: backup mode: snapshot
110: 2024-03-20 22:21:31 INFO: ionice priority: 7
110: 2024-03-20 22:21:31 INFO: pending configuration changes found (not included into backup)
110: 2024-03-20 22:21:31 INFO: creating vzdump archive '/mnt/pve/backup1tb/dump/vzdump-qemu-110-2024_03_20-22_21_30.vma.zst'
110: 2024-03-20 22:21:31 INFO: issuing guest-agent 'fs-freeze' command
110: 2024-03-20 22:21:31 INFO: issuing guest-agent 'fs-thaw' command
110: 2024-03-20 22:21:31 INFO: started backup task 'ed766ee5-d921-4659-90f3-a80e1f01caa3'
```
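(About that "pending configuration changes" line: as far as I understand it, the VM config was changed while the VM was running and the change only takes effect after a restart, so the backup is made with the old, still-active config. The pending values can be inspected with qm, e.g. for VM 110:)

```
# Show config changes that are queued but not yet applied
qm pending 110
# Show the currently active (running) configuration
qm config 110 --current
```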

Help!
 
I have done some analysis.
IDs 107, 108 and 109 are fine.

ID 110 has a problem (the backup is wrong; the console goes to 102, which should not even be running).
ID 111 has a problem (the console goes to the name of ID 110).

Look at the disk of 110: no .qcow2?
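(A sketch of how the disk assignment can be cross-checked, using my own storage names local-lvm and backup1tb:)

```
# Which disk volume does the config of VM 110 point at?
qm config 110 | grep -E '^(scsi|virtio|ide|sata|efidisk)'
# Does that volume actually exist on the storage?
pvesm list local-lvm | grep vm-110
ls -lh /mnt/pve/backup1tb/images/110/
```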

```
root@pve:~# ps waux | egrep debug-threads | awk -F, '{print $1, $72, $73, $77}'
root 1184 1.2 15.9 3139620 958944 ? Sl mrt01 350:27 /usr/bin/kvm -id 103 -name deb-2024-nagios id=ide2 bootindex=101 -device virtio-scsi-pci iothread=iothread-virtioscsi0 -drive file=/mnt/pve/backup1tb/images/103/vm-103-disk-0.qcow2
root 54203 0.0 0.0 6468 2304 pts/0 S+ 21:30 0:00 grep -E debug-threads
root 2894683 11.7 10.7 4292076 645664 ? Rl mrt15 1075:31 /usr/bin/kvm -id 107 -name alp24-agenda addr=0x1 iothread=iothread-virtioscsi0 -drive file=/mnt/pve/backup1tb/images/107/vm-107-disk-0.qcow2 cache=none
root 3608751 11.8 8.8 4864136 532424 ? Sl mrt18 512:53 /usr/bin/kvm -id 108 -name alp24-hccom addr=0x1 iothread=iothread-virtioscsi0 -drive file=/mnt/pve/backup1tb/images/108/vm-108-disk-0.qcow2 cache=none
root 3614483 11.7 8.8 4765496 529688 ? Sl mrt18 502:53 /usr/bin/kvm -id 109 -name alp24-mastBot addr=0x1 iothread=iothread-virtioscsi0 -drive file=/mnt/pve/backup1tb/images/109/vm-109-disk-0.qcow2 cache=none
root 4029047 0.8 15.3 3296524 924200 ? Sl mrt20 11:58 /usr/bin/kvm -id 110 -name alp24-time id=ide2 bootindex=101 -device virtio-scsi-pci iothread=iothread-virtioscsi0 -drive file=/dev/pve/vm-110-disk-0
root 4044093 13.0 8.8 3097648 531996 ? Sl mrt20 180:21 /usr/bin/kvm -id 111 -name alp24time2 iothread=iothread-virtioscsi0 -drive file=/mnt/pve/backup1tb/images/111/vm-111-disk-0.qcow2 if=none cache=none
```
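(A perhaps less error-prone way to map VMIDs to running kvm processes is via the pid files that Proxmox writes, if I am not mistaken, under /var/run/qemu-server:)

```
# One pid file per running VM, named after the VMID
for f in /var/run/qemu-server/*.pid; do
    echo "$f -> pid $(cat "$f")"
done
# Cross-check the -id/-name arguments of one specific VM, e.g. 110
ps -o pid,args -p "$(cat /var/run/qemu-server/110.pid)"
```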
 
Here is the rest of the ps waux output.

Are these lines normal?

```
root 4029051 0.0 0.0 0 0 ? S mrt20 0:00 [kvm-nx-lpage-recovery-4029047]
```


```
root@pve:~# ps waux | egrep -v debug-threads
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 168536 9720 ? Ss mrt01 0:08 /sbin/init
root 2 0.0 0.0 0 0 ? S mrt01 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< mrt01 0:00 [rcu_gp]
root 4 0.0 0.0 0 0 ? I< mrt01 0:00 [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< mrt01 0:00 [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< mrt01 0:00 [netns]
root 8 0.0 0.0 0 0 ? I< mrt01 0:00 [kworker/0:0H-events_highpri]
root 11 0.0 0.0 0 0 ? I< mrt01 0:00 [mm_percpu_wq]
root 12 0.0 0.0 0 0 ? I mrt01 0:00 [rcu_tasks_kthread]
root 13 0.0 0.0 0 0 ? I mrt01 0:00 [rcu_tasks_rude_kthread]
root 14 0.0 0.0 0 0 ? I mrt01 0:00 [rcu_tasks_trace_kthread]
root 15 0.0 0.0 0 0 ? S mrt01 0:16 [ksoftirqd/0]
root 16 0.0 0.0 0 0 ? I mrt01 2:25 [rcu_preempt]
root 17 0.0 0.0 0 0 ? S mrt01 0:07 [migration/0]
root 18 0.0 0.0 0 0 ? S mrt01 0:00 [idle_inject/0]
root 19 0.0 0.0 0 0 ? S mrt01 0:00 [cpuhp/0]
root 20 0.0 0.0 0 0 ? S mrt01 0:00 [cpuhp/1]
root 21 0.0 0.0 0 0 ? S mrt01 0:00 [idle_inject/1]
root 22 0.0 0.0 0 0 ? S mrt01 0:16 [migration/1]
root 23 0.0 0.0 0 0 ? S mrt01 0:18 [ksoftirqd/1]
root 25 0.0 0.0 0 0 ? I< mrt01 0:00 [kworker/1:0H-kblockd]
root 26 0.0 0.0 0 0 ? S mrt01 0:00 [cpuhp/2]
root 27 0.0 0.0 0 0 ? S mrt01 0:00 [idle_inject/2]
root 28 0.0 0.0 0 0 ? S mrt01 0:14 [migration/2]
root 29 0.0 0.0 0 0 ? S mrt01 0:16 [ksoftirqd/2]
root 31 0.0 0.0 0 0 ? I< mrt01 0:00 [kworker/2:0H-kblockd]
root 32 0.0 0.0 0 0 ? S mrt01 0:00 [cpuhp/3]
root 33 0.0 0.0 0 0 ? S mrt01 0:00 [idle_inject/3]
root 34 0.0 0.0 0 0 ? S mrt01 0:07 [migration/3]
root 35 0.0 0.0 0 0 ? S mrt01 0:13 [ksoftirqd/3]
root 37 0.0 0.0 0 0 ? I< mrt01 0:00 [kworker/3:0H-events_highpri]
root 38 0.0 0.0 0 0 ? S mrt01 0:00 [kdevtmpfs]
root 39 0.0 0.0 0 0 ? I< mrt01 0:00 [inet_frag_wq]
root 41 0.0 0.0 0 0 ? S mrt01 0:00 [kauditd]
root 42 0.0 0.0 0 0 ? S mrt01 0:00 [khungtaskd]
root 43 0.0 0.0 0 0 ? S mrt01 0:00 [oom_reaper]
root 45 0.0 0.0 0 0 ? I< mrt01 0:00 [writeback]
root 46 0.0 0.0 0 0 ? S mrt01 3:20 [kcompactd0]
root 47 0.3 0.0 0 0 ? SN mrt01 114:53 [ksmd]
root 49 0.0 0.0 0 0 ? SN mrt01 2:37 [khugepaged]
root 50 0.0 0.0 0 0 ? I< mrt01 0:00 [kintegrityd]
root 51 0.0 0.0 0 0 ? I< mrt01 0:00 [kblockd]
root 52 0.0 0.0 0 0 ? I< mrt01 0:00 [blkcg_punt_bio]
root 55 0.0 0.0 0 0 ? I< mrt01 0:00 [tpm_dev_wq]
root 56 0.0 0.0 0 0 ? I< mrt01 0:00 [ata_sff]
root 57 0.0 0.0 0 0 ? I< mrt01 0:00 [md]
root 58 0.0 0.0 0 0 ? I< mrt01 0:00 [md_bitmap]
root 59 0.0 0.0 0 0 ? I< mrt01 0:00 [edac-poller]
root 60 0.0 0.0 0 0 ? I< mrt01 0:00 [devfreq_wq]
root 61 0.0 0.0 0 0 ? S mrt01 0:00 [watchdogd]
root 62 0.0 0.0 0 0 ? I< mrt01 1:07 [kworker/0:1H-kblockd]
root 63 0.0 0.0 0 0 ? S mrt01 0:15 [kswapd0]
root 64 0.0 0.0 0 0 ? S mrt01 0:00 [ecryptfs-kthread]
root 65 0.0 0.0 0 0 ? I< mrt01 0:00 [kthrotld]
root 66 0.0 0.0 0 0 ? I< mrt01 0:00 [acpi_thermal_pm]
root 67 0.0 0.0 0 0 ? S mrt01 0:00 [scsi_eh_0]
root 68 0.0 0.0 0 0 ? I< mrt01 0:00 [scsi_tmf_0]
root 69 0.0 0.0 0 0 ? S mrt01 0:00 [scsi_eh_1]
root 70 0.0 0.0 0 0 ? I< mrt01 0:00 [scsi_tmf_1]
root 72 0.0 0.0 0 0 ? I< mrt01 0:00 [mld]
root 74 0.0 0.0 0 0 ? I< mrt01 0:00 [ipv6_addrconf]
root 82 0.0 0.0 0 0 ? I< mrt01 0:00 [kstrp]
root 84 0.0 0.0 0 0 ? I< mrt01 0:00 [kworker/u17:0]
root 88 0.0 0.0 0 0 ? I< mrt01 0:00 [charger_manager]
root 89 0.0 0.0 0 0 ? I< mrt01 1:06 [kworker/1:1H-kblockd]
root 90 0.0 0.0 0 0 ? I< mrt01 5:09 [kworker/2:1H-kblockd]
root 132 0.0 0.0 0 0 ? I< mrt01 1:11 [kworker/3:1H-kblockd]
root 174 0.0 0.0 0 0 ? S mrt01 0:00 [scsi_eh_2]
root 175 0.0 0.0 0 0 ? I< mrt01 0:00 [scsi_tmf_2]
root 176 0.0 0.0 0 0 ? S mrt01 8:23 [usb-storage]
root 177 0.0 0.0 0 0 ? I< mrt01 0:00 [uas]
root 180 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:0]
root 181 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:1]
root 189 0.0 0.0 0 0 ? I< mrt01 0:00 [dm_bufio_cache]
root 190 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:2]
root 191 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:3]
root 207 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:4]
root 208 0.0 0.0 0 0 ? I< mrt01 0:00 [kcopyd]
root 209 0.0 0.0 0 0 ? I< mrt01 0:00 [dm-thin]
root 210 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:5]
root 212 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:6]
root 223 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:7]
root 229 0.0 0.0 0 0 ? I< mrt01 0:00 [kdmflush/252:8]
root 283 0.0 0.0 0 0 ? S mrt01 0:38 [jbd2/dm-1-8]
root 284 0.0 0.0 0 0 ? I< mrt01 0:00 [ext4-rsv-conver]
root 338 0.0 0.4 49484 26368 ? Ss mrt01 0:04 /lib/systemd/systemd-journald
root 353 0.0 0.0 0 0 ? S< mrt01 0:00 [spl_system_task]
root 354 0.0 0.0 0 0 ? S< mrt01 0:00 [spl_delay_taskq]
root 355 0.0 0.0 0 0 ? S< mrt01 0:00 [spl_dynamic_tas]
root 356 0.0 0.0 0 0 ? S< mrt01 0:00 [spl_kmem_cache]
root 359 0.0 0.4 80580 25104 ? SLsl mrt01 1:34 /sbin/dmeventd -f
root 362 0.0 0.1 27720 6560 ? Ss mrt01 0:02 /lib/systemd/systemd-udevd
root 366 0.0 0.0 0 0 ? S< mrt01 0:00 [zvol]
root 367 0.0 0.0 0 0 ? S mrt01 0:00 [arc_prune]
root 368 0.0 0.0 0 0 ? S mrt01 0:39 [arc_evict]
root 369 0.0 0.0 0 0 ? SN mrt01 0:39 [arc_reap]
root 370 0.0 0.0 0 0 ? S mrt01 0:00 [dbu_evict]
root 371 0.0 0.0 0 0 ? SN mrt01 0:38 [dbuf_evict]
root 410 0.0 0.0 0 0 ? SN mrt01 0:00 [z_vdev_file]
root 519 0.0 0.0 0 0 ? S mrt01 0:00 [irq/30-mei_me]
root 604 0.0 0.0 0 0 ? I< mrt01 0:00 [ttm]
root 618 0.0 0.0 0 0 ? S mrt01 0:00 [card0-crtc0]
root 619 0.0 0.0 0 0 ? S mrt01 0:00 [card0-crtc1]
root 620 0.0 0.0 0 0 ? S mrt01 0:00 [card0-crtc2]
root 621 0.0 0.0 0 0 ? S mrt01 0:00 [card0-crtc3]
root 712 0.0 0.0 0 0 ? S mrt01 0:38 [jbd2/sda1-8]
root 713 0.0 0.0 0 0 ? I< mrt01 0:00 [ext4-rsv-conver]
root 716 0.0 0.0 0 0 ? S mrt01 0:25 [l2arc_feed]
_rpc 792 0.0 0.0 7876 3840 ? Ss mrt01 0:01 /sbin/rpcbind -f -w
message+ 799 0.0 0.0 9268 4608 ? Ss mrt01 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root 802 0.0 0.0 152748 1792 ? Ssl mrt01 0:00 /usr/bin/lxcfs /var/lib/lxcfs
root 803 0.0 0.0 278156 3584 ? Ssl mrt01 0:00 /usr/lib/x86_64-linux-gnu/pve-lxc-syscalld/pve-lxc-syscalld --system /run/pve/lxc-syscalld.sock
root 806 0.0 0.1 11852 6144 ? Ss mrt01 0:00 /usr/sbin/smartd -n -q never
root 808 0.0 0.1 49944 6784 ? Ss mrt01 0:02 /lib/systemd/systemd-logind
root 809 0.0 0.0 0 0 ? I< mrt01 0:00 [rpciod]
root 810 0.0 0.0 0 0 ? I< mrt01 0:00 [xprtiod]
root 814 0.0 0.0 2332 1280 ? Ss mrt01 1:00 /usr/sbin/watchdog-mux
root 816 0.0 0.0 166924 5248 ? Ssl mrt01 0:00 /usr/sbin/zed -F
root 817 0.0 0.0 7068 1740 ? S mrt01 0:39 /bin/bash /usr/sbin/ksmtuned
root 826 0.0 0.0 5308 1920 ? Ss mrt01 0:00 /usr/sbin/qmeventd /var/run/qmeventd.sock
root 853 0.0 0.0 0 0 ? I< mrt01 0:00 [tls-strp]
root 913 0.0 0.0 5024 2176 ? Ss mrt01 0:00 /usr/libexec/lxc/lxc-monitord --daemon
root 927 0.0 0.0 5876 1920 tty1 Ss+ mrt01 0:00 /sbin/agetty -o -p -- \u --noclear - linux
root 937 0.0 0.1 15408 9216 ? Ss mrt01 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
_chrony 952 0.0 0.0 18860 3208 ? S mrt01 0:07 /usr/sbin/chronyd -F 1
_chrony 963 0.0 0.0 10532 2580 ? S mrt01 0:00 /usr/sbin/chronyd -F 1
root 1000 0.0 0.0 727364 3296 ? Ssl mrt01 4:44 /usr/bin/rrdcached -B -b /var/lib/rrdcached/db/ -j /var/lib/rrdcached/journal/ -p /var/run/rrdcached.pid -l unix:/var/run/rrdcached.sock
root 1025 0.0 0.6 821768 40796 ? Ssl mrt01 22:37 /usr/bin/pmxcfs
root 1106 0.0 0.0 42656 4500 ? Ss mrt01 0:06 /usr/lib/postfix/sbin/master -w
postfix 1108 0.0 0.1 43092 6912 ? S mrt01 0:01 qmgr -l -t unix -u
root 1113 0.0 0.0 6612 2560 ? Ss mrt01 0:04 /usr/sbin/cron -f
root 1120 0.2 0.5 157012 35112 ? Ss mrt01 67:35 pve-firewall
root 1123 0.5 0.7 151976 42712 ? Ss mrt01 148:27 pvestatd
root 1125 0.0 0.0 2460 1024 ? S mrt01 0:38 bpfilter_umh
root 1150 0.0 0.4 232940 28368 ? Ss mrt01 0:27 pvedaemon
root 1158 0.0 0.2 219044 12108 ? Ss mrt01 2:59 pve-ha-crm
www-data 1159 0.0 2.7 234184 163968 ? Ss mrt01 0:43 pveproxy
www-data 1166 0.0 1.0 80816 63360 ? Ss mrt01 0:31 spiceproxy
root 1168 0.0 0.5 218472 35356 ? Ss mrt01 5:00 pve-ha-lrm
root 1189 0.0 0.0 0 0 ? S mrt01 0:00 [kvm-nx-lpage-recovery-1184]
root 1244 0.0 0.0 0 0 ? S mrt01 0:00 [kvm-pit/1184]
root 1286 0.0 0.4 214172 24760 ? Ss mrt01 1:58 pvescheduler
root 1297 0.0 0.0 0 0 ? I< mrt01 0:00 [dio/sda1]
root 24511 0.0 0.0 0 0 ? I 18:10 0:01 [kworker/2:0-events]
root 24541 0.0 0.0 0 0 ? I 18:10 0:00 [kworker/1:2]
root 39323 0.0 0.0 0 0 ? I 19:50 0:00 [kworker/u16:4-events_freezable_power_]
root 42664 0.0 0.0 0 0 ? I 20:13 0:00 [kworker/u16:0-events_unbound]
root 44808 0.0 0.0 0 0 ? I 20:28 0:00 [kworker/u16:2-events_freezable_power_]
root 49248 0.0 0.0 0 0 ? I 20:58 0:00 [kworker/3:1-rcu_gp]
root 49249 0.0 0.0 0 0 ? I 20:58 0:00 [kworker/0:0-events]
root 49482 0.0 0.0 0 0 ? I 20:59 0:00 [kworker/2:2-cgroup_destroy]
root 49525 0.0 0.0 0 0 ? I 21:00 0:00 [kworker/u16:1-dm-thin]
postfix 49624 0.0 0.1 43056 6784 ? S 21:00 0:00 pickup -l -t unix -u -c
www-data 49947 0.1 2.4 243120 147116 ? S 21:02 0:03 pveproxy worker
root 50270 0.0 0.1 17968 11108 ? Ss 21:04 0:00 sshd: root@pts/0
root 50296 0.0 0.1 19376 10752 ? Ss 21:04 0:00 /lib/systemd/systemd --user
root 50297 0.0 0.0 169596 4848 ? S 21:04 0:00 (sd-pam)
root 50315 0.0 0.0 8112 4736 pts/0 Ss 21:04 0:00 -bash
root 51167 0.1 0.7 241540 44396 ? S 21:10 0:02 pvedaemon worker
root 51285 0.3 0.7 241536 45680 ? S 21:11 0:05 pvedaemon worker
root 51581 0.3 0.7 241424 44016 ? S 21:13 0:04 pvedaemon worker
root 52225 0.0 0.0 0 0 ? I 21:17 0:00 [kworker/u16:3-events_freezable_power_]
www-data 52849 0.1 2.4 242968 146348 ? S 21:21 0:01 pveproxy worker
www-data 53081 0.1 2.4 242964 146092 ? S 21:22 0:01 pveproxy worker
root 54274 0.0 0.0 0 0 ? I 21:30 0:00 [kworker/u16:5-events_power_efficient]
root 55064 0.0 0.0 5468 1792 ? S 21:36 0:00 sleep 60
root 55180 0.0 0.0 11092 4736 pts/0 R+ 21:37 0:00 ps waux
root 2331083 0.0 0.0 0 0 ? I< mrt12 0:00 [kdmflush/252:9]
root 2894688 0.0 0.0 0 0 ? S mrt15 0:00 [kvm-nx-lpage-recovery-2894683]
root 2894742 0.0 0.0 0 0 ? S mrt15 0:00 [kvm-pit/2894683]
root 3608756 0.0 0.0 0 0 ? S mrt18 0:00 [kvm-nx-lpage-recovery-3608751]
root 3608813 0.0 0.0 0 0 ? S mrt18 0:00 [kvm-pit/3608751]
root 3614488 0.0 0.0 0 0 ? S mrt18 0:00 [kvm-nx-lpage-recovery-3614483]
root 3614544 0.0 0.0 0 0 ? S mrt18 0:00 [kvm-pit/3614483]
root 4028214 0.0 0.0 0 0 ? I< mrt20 0:00 [kdmflush/252:10]
root 4029051 0.0 0.0 0 0 ? S mrt20 0:00 [kvm-nx-lpage-recovery-4029047]
root 4029088 0.0 0.0 0 0 ? I mrt20 0:14 [kworker/3:2-events]
root 4029106 0.0 0.0 0 0 ? S mrt20 0:00 [kvm-pit/4029047]
root 4043276 0.0 0.0 0 0 ? I mrt20 0:04 [kworker/1:1-events]
root 4044098 0.0 0.0 0 0 ? S mrt20 0:00 [kvm-nx-lpage-recovery-4044093]
root 4044152 0.0 0.0 0 0 ? S mrt20 0:00 [kvm-pit/4044093]
root 4057888 0.0 0.0 79184 2176 ? Ssl 00:00 0:08 /usr/sbin/pvefw-logger
www-data 4057892 0.0 0.9 81056 54180 ? S 00:00 0:02 spiceproxy worker
root 4112362 0.0 0.0 0 0 ? I 06:08 0:03 [kworker/0:2-rcu_gp]
```
 
This sounds weird and unexpected. Can you please post the configs of these VMs (qm config {vmid})? Please post it inside [CODE][/CODE] tags for better readability.
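For example, for the two problem VMs:

```
qm config 110
qm config 111
```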
> ID 110 has a problem (the backup is wrong, console goes to 102, which should not be running)
> ID 111 has a problem (console goes to the name of ID 110)
How do you determine that the console is accessing the wrong VM?
What do you mean by "the backup is wrong"? Too large for what you expect?
 
Well, I have solved the problems by:

1- rebooting;
2- copying the disks of the problem VMs to another storage, so double usage of one disk by two IDs is not possible (see the sketch after this list);
3- deleting the problem VMs after making sure they did not use a disk of another VM;
4- making sure all VMs have the QEMU guest agent, so they react to GUI restarts in the right way.
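(A rough CLI sketch of steps 2 and 4; the exact option names may differ slightly between versions, and backup1tb is just my target storage:)

```
# Step 2: move the disk of a problem VM to a different storage
qm move-disk 110 scsi0 backup1tb --delete 1
# Step 4: enable the QEMU guest agent in the VM config
# (the qemu-guest-agent package must also be installed and running inside the guest)
qm set 110 --agent enabled=1
```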

If anybody has a clue how this mess could have happened, do let me know.
 
> This sounds weird and unexpected. Can you please post the configs of these VMs (qm config {vmid})? Please post it inside [CODE][/CODE] tags for better readability.
>
> How do you determine that the console is accessing the wrong VM?
> What do you mean by "the backup is wrong"? Too large for what you expect?
I cannot, I am sorry; I have now deleted the problem VMs.

My only explanation is user error on my side, where I cloned the wrong IDs, perhaps because I was too tired.
My apologies.

If it happens again I will make sure to save the configs.
 