CT 104 using root disk vm-103

xtura

New Member
Apr 27, 2024
Hi there,

Brand new Proxmox user here. I've been testing Proxmox 8.1.5 on an OVH template for a week, with no previous experience. Congratulations to all the developers and people involved for the very good work.

Yesterday I was working on a mail server with 3 CTs, just for testing different auth mechanisms. It was working great. Suddenly my Postfix CT did not show any files in its /etc/postfix directory. I was shocked. A few minutes later I realized that my Postfix CT 104 was using the root disk vm-103 of my other CT 103 (MySQL). At first I was convinced I had done something stupid, but honestly I have no idea at all how on earth I managed to produce this kind of disaster, so accept in advance my apologies if this is in fact my fault. For the first time I entered the Proxmox shell, and I found the following:

grep -R "filesystem"
pve/tasks/D/UPID:snip:vzrestore:100:root@pam::Creating filesystem with 16777216 4k blocks and 4194304 inodes
pve/tasks/E/UPID:snip::vzcreate:106:root@pam::Creating filesystem with 4194304 4k blocks and 1048576 inodes
pve/tasks/9/UPID::snip:vzcreate:102:root@pam::Creating filesystem with 7864320 4k blocks and 1966080 inodes
pve/tasks/9/UPID::snip::vzcreate:103:root@pam::Creating filesystem with 2097152 4k blocks and 524288 inodes
pve/tasks/A/UPID::snip::vzcreate:107:root@pam::Creating filesystem with 1310720 4k blocks and 327680 inodes
pve/tasks/A/UPID::snip::vzrestore:101:root@pam::Creating filesystem with 8388608 4k blocks and 2097152 inodes
pve/tasks/C/UPID::snip::vzcreate:103:root@pam::Creating filesystem with 2097152 4k blocks and 524288 inodes
pve/tasks/C/UPID::snip::vzcreate:103:root@pam::Creating filesystem with 4194304 4k blocks and 1048576 inodes
pve/tasks/C/UPID::snip::vzcreate:103:root@pam::Creating filesystem with 2097152 4k blocks and 524288 inodes

Apart from the two restores, it looks like vm-104 was never created. And yes, I removed a CT or two (I can't remember exactly now), but I understand it's possible that 103 was created more than once... or not?
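
Something I wasn't sure about: is there a better place to check whether a create task for 104 ever ran at all? I assume the task index would show it, something like

grep ':vzcreate:104:\|:vzrestore:104:' /var/log/pve/tasks/index

but I may be looking in the wrong file, so please correct me.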

/var/lib/vz/images# ls -la
drwxr----- 2 root root 4096 Apr 16 17:58 100
drwxr----- 2 root root 4096 Apr 16 18:12 101
drwxr----- 2 root root 4096 Apr 18 15:56 102
drwxr----- 2 root root 4096 Apr 22 15:51 103
drwxr----- 2 root root 4096 Apr 21 13:21 105
drwxr----- 2 root root 4096 Apr 22 18:56 106
drwxr----- 2 root root 4096 Apr 26 10:21 107

no 104
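
(I'm guessing pvesm list local would be the more proper way to ask the storage itself which volumes it knows about, instead of poking at the directory; still learning the tooling, so correct me if that's wrong.)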

/etc/pve/nodes/xxxx/lxc# ls -la
-rw-r----- 1 root www-data 411 Apr 26 23:44 100.conf
-rw-r----- 1 root www-data 410 Apr 26 23:43 101.conf
-rw-r----- 1 root www-data 404 Apr 26 23:44 102.conf
-rw-r----- 1 root www-data 306 Apr 27 15:56 103.conf
-rw-r----- 1 root www-data 291 Apr 27 15:55 104.conf
-rw-r----- 1 root www-data 307 Apr 26 23:43 106.conf
-rw-r----- 1 root www-data 305 Apr 26 10:21 107.conf

104.conf is there. But...

/etc/pve/nodes/xxxx/lxc# cat 104.conf
arch: amd64
cores: 1
features: nesting=1
hostname: postfix
memory: 1024
nameserver: 1.1.1.1
net0: name=eth0,bridge=vmbr1,firewall=1,hwaddr=xx:xx:xx:xx:xx:xx,ip=10.xx.xx.x7/24,type=veth
ostype: alpine
rootfs: local:103/vm-103-disk-0.raw,size=8G
searchdomain: 1.1.1.1
swap: 256
unprivileged: 1

and

/etc/pve/nodes/xxxx/lxc# cat 103.conf
arch: amd64
cores: 2
features: nesting=1
hostname: mysql
memory: 16384
nameserver: 1.1.1.1
net0: name=eth0,bridge=vmbr1,firewall=1,gw=10.xx.xx.xx,hwaddr=xx:xx:xx:xx:xx:xx,ip=10.xx.xx.x6/24,type=veth
ostype: alpine
rootfs: local:103/vm-103-disk-0.raw,size=16G
searchdomain: 1.1.1.1
swap: 2048
unprivileged: 1

Notice that the two CTs are using the same disk. But wait, the size is different!
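
(I suppose something like grep rootfs /etc/pve/nodes/xxxx/lxc/*.conf would have shown the same clash across all containers at once, if that's a sane way to check.)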

Nothing really important has been lost, but since I'm not aware of having done anything that could produce this mess, it would be reassuring if someone could shed some light and explain how I could reproduce this behaviour without using the shell, so I can avoid future disasters.

thank you!

xtura
 
I'm guessing you've messed up something with your cluster.

The reason I say this: you keep showing output from /etc/pve/nodes/xxxx/lxc, which is cluster/node-specific info.

In fact you should be showing output from /etc/pve/lxc/CTID.conf, which is the correct location for LXC conf files. So, for example, for LXC 104 show the output of cat /etc/pve/lxc/104.conf
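
Alternatively (if I remember the CLI correctly), this should print the same configuration regardless of which path the file lives at:

pct config 104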

No, I haven't worked out what's gone wrong, but your situation shouldn't be happening. Almost certainly it's the product of a wrong configuration of some sort. You don't really provide enough info as to what you actually did. What did you do in the GUI, and what does your GUI show for your configuration?
 
Hi,
maybe the configuration file was copied/edited by hand at some point? Please check your shell's history/logs. The size recorded in the configuration is only informational; whenever the actual size is needed for an operation, e.g. a resize or disk migration, it is queried (and only then updated in the configuration).
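
For example (assuming root's default bash history), something like

grep -n 104 /root/.bash_history

might show whether the file was ever edited or copied by hand. And to see the actual image size, as opposed to the size= hint in the config (assuming the default directory layout of the 'local' storage):

ls -lh /var/lib/vz/images/103/vm-103-disk-0.raw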

I'm guessing you've messed up something with your cluster.

The reason I say this: you keep showing output from /etc/pve/nodes/xxxx/lxc, which is cluster/node-specific info.

In fact you should be showing output from /etc/pve/lxc/CTID.conf, which is the correct location for LXC conf files. So, for example, for LXC 104 show the output of cat /etc/pve/lxc/104.conf
Please note that /etc/pve/lxc is just a link to the directory for the current node, i.e. /etc/pve/nodes/xxxx/lxc.
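You can verify this with, e.g.:

readlink -f /etc/pve/lxc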
 
Please note that /etc/pve/lxc is just a link to the directory for the current node, i.e. /etc/pve/nodes/xxxx/lxc.
Yes, I was aware of that; I just found it interesting that the OP always used that path to show the configs, when most users would show them via /etc/pve/lxc
 
Yes, I was aware of that; I just found it interesting that the OP always used that path to show the configs, when most users would show them via /etc/pve/lxc
Thank you all for reading me, and... guess why ;-)

That's a sign of my "neophyte" status regarding Proxmox internals.

The last 3 weeks were intensive, so I don't rule out at all that I made some manual mess and simply can't remember even accessing the shell. My post was intended to share the evidence and find out whether this is something that can be produced using the GUI exclusively, so that I can avoid any known non-intuitive procedures in the future.

And yes, it must definitely have been me, because if not, something paranormal is happening :eek:

xtura
 
