Hello,
I restored two VMs into a new cluster, and after boot-up all operations are very slow. I copied the backups from the Proxmox 4.4 node to the 5.3 cluster. The first step was to create a VM with the same ID and the same boot disk size; next I restored the backup onto this VM. After boot, the VM is very slow. (The restore is sketched below.)
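For reference, the restore was roughly this (a sketch, not the exact invocation; the archive path is a placeholder, the storage ID is the one from the new config further down):

# restore the vzdump archive onto the pre-created VMID 114 on the new cluster;
# --force allows overwriting the empty VM created in the first step
qmrestore /path/to/vzdump-qemu-114.vma.lzo 114 --storage Disks --force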
Old .conf:
balloon: 1
bootdisk: ide0
cores: 4
ide0: ceph_disks_01:vm-114-disk-1,size=100G
ide2: none,media=cdrom
memory: 16384
name: isp31deb8601
net0: e1000=02:13:7A:42:3A:09,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=70d13183-68af-46e1-b586-93d4810a93c2
sockets: 2
New .conf:
agent: 1,fstrim_cloned_disks=1
balloon: 1
bootdisk: ide0
cores: 4
ide0: Disks:vm-114-disk-2,cache=writethrough,size=100G,ssd=1
ide2: none,media=cdrom
memory: 16384
name: isp31deb8601
net0: virtio=02:13:7A:42:3A:09,bridge=vmbr0,tag=100
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=70d13183-68af-46e1-b586-93d4810a93c2
sockets: 2
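Compared side by side, the notable differences on the new VM are the cache=writethrough and ssd=1 flags on ide0 and the NIC change from e1000 to virtio. Just to illustrate how the cache mode could be ruled out (a sketch, not something I have tried yet; VMID and storage taken from the configs above):

# change only the cache mode on the restored disk, keeping the other flags,
# then reboot the VM and re-run the benchmark for comparison
qm set 114 --ide0 Disks:vm-114-disk-2,cache=none,size=100G,ssd=1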
Old node: Proxmox Virtual Environment 4.4-5/c43015a5
CPUs: 32 x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (2 sockets)
Kernel version: Linux 4.4.35-1-pve #1 SMP Thu Dec 22 14:58:39 CET 2016

New node: Proxmox Virtual Environment 5.3-8
CPUs: 48 x Intel(R) Xeon(R) CPU E5-2651 v2 @ 1.80GHz (2 sockets)
Kernel version: Linux 4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100)
PVE Manager version: pve-manager/5.3-8/2929af8e
The old Proxmox is a single node with Ceph (12 OSDs). The new Proxmox is a 12-node cluster with 36 OSDs, using 10Gb SFP+ for the cluster network and 10Gb SFP+ for Ceph. All VMs built directly on the new nodes run with good performance; only the ones restored from the 4.4 backups have poor speed.
Here is a very slow hdparm test from the restored VM:
Timing cached reads: 2 MB in 2.75 seconds = 746.00 kB/sec
Timing buffered disk reads: 2 MB in 4.86 seconds = 421.00 kB/sec
The same test on another VM:
Timing cached reads: 10000 MB in 2.75 seconds = 5003.60 MB/sec
Timing buffered disk reads: 2 MB in 4.86 seconds = 6.77 MB/sec
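For reference, the numbers above come from hdparm's timing test, run inside each guest roughly like this (the device node is an assumption; with a disk on ide0, a Linux guest typically sees it as /dev/sda):

# -T measures cached reads (memory/cache throughput),
# -t measures buffered reads from the (virtual) disk
hdparm -tT /dev/sda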
I have no idea why the disk speed is this poor.