Hello to all!
I would like to ask for some advice.
I migrated my old 120 GB SSD (with a running Proxmox installation) to a new 1 TB SSD via a binary copy.
Everything worked fine.
After the copy I booted from the new disk and all VMs are running. As expected, the partitions remained as they were.
Currently it looks like this:
root@srv0001:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 256M 0 part /boot/efi
└─sda3 8:3 0 111.6G 0 part
  ├─pve-root 251:0 0 27.8G 0 lvm /
  ├─pve-swap 251:1 0 7G 0 lvm [SWAP]
  ├─pve-data_tmeta 251:2 0 64M 0 lvm
  │ └─pve-data-tpool 251:4 0 62.9G 0 lvm
  │   ├─pve-data 251:5 0 62.9G 0 lvm
  │   ├─pve-vm--301--disk--1 251:6 0 8G 0 lvm
  │   ├─pve-vm--401--disk--1 251:7 0 32G 0 lvm
  │   ├─pve-vm--601--disk--1 251:8 0 16G 0 lvm
  │   ├─pve-vm--401--state--OracleVM_V1 251:9 0 8.5G 0 lvm
  │   ├─pve-vm--601--state--Stable_01 251:10 0 4.5G 0 lvm
  │   ├─pve-vm--401--state--Stable_01 251:11 0 8.5G 0 lvm
  │   └─pve-vm--601--state--unstabil 251:12 0 4.5G 0 lvm
  └─pve-data_tdata 251:3 0 62.9G 0 lvm
    └─pve-data-tpool 251:4 0 62.9G 0 lvm
      ├─pve-data 251:5 0 62.9G 0 lvm
      ├─pve-vm--301--disk--1 251:6 0 8G 0 lvm
      ├─pve-vm--401--disk--1 251:7 0 32G 0 lvm
      ├─pve-vm--601--disk--1 251:8 0 16G 0 lvm
      ├─pve-vm--401--state--OracleVM_V1 251:9 0 8.5G 0 lvm
      ├─pve-vm--601--state--Stable_01 251:10 0 4.5G 0 lvm
      ├─pve-vm--401--state--Stable_01 251:11 0 8.5G 0 lvm
      └─pve-vm--601--state--unstabil 251:12 0 4.5G 0 lvm
I am bloody new to Linux environments and I don't want to kill my system by changing partitions.
Please advise me: warn me about the pitfalls and tell me the preferred commands (and switches) to use.
I want to use the additional space only for new VMs and for creating backups. I guess I will have to grow my sda3 partition, perhaps roughly along the lines of the sketch after my questions below.
* Will the new space automatically be available after changing the partition?
* Do I have to power down all VMs in the meanwhile?
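From reading the wiki and man pages, I assume the rough sequence would be something like the following. I have not run any of this yet; the sizes, the "backup" volume name and the /mnt/backup mount point are only placeholders I made up, so please correct me if a step is wrong or dangerous:

# 0) Inspect the current LVM state first (volume group "pve", free space, thin pool usage)
pvs
vgs
lvs -a pve

# 1) After a binary copy onto a bigger disk, the backup GPT header is still where the
#    old 120 GB disk ended; as far as I understand, this moves it to the end of the new disk:
sgdisk -e /dev/sda

# 2) Grow the last partition (sda3) to use the whole disk, then re-read the partition table:
parted /dev/sda resizepart 3 100%
partprobe /dev/sda

# 3) Tell LVM that the physical volume has grown:
pvresize /dev/sda3

# 4a) Either extend the existing thin pool so new VM disks fit
#     (does the pool metadata need extending as well?):
lvextend -L +400G /dev/pve/data

# 4b) ...and/or create a separate logical volume just for backups:
lvcreate -n backup -L 200G pve
mkfs.ext4 /dev/pve/backup
mkdir -p /mnt/backup
mount /dev/pve/backup /mnt/backup

Is that the right general direction, or is there a safer, Proxmox-specific way to make the new space available as storage?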
I am using Proxmox VE 4.4 on Debian 8 (Jessie), kernel 4.4.35-1-pve.
BR Michael