pveversion -v
qm config <VMID>
Apr 02 18:13:07 pve kernel: ata1.00: exception Emask 0x0 SAct 0x10 SErr 0x0 action 0x0
Apr 02 18:13:07 pve kernel: ata1.00: irq_stat 0x40000008
Apr 02 18:13:07 pve kernel: ata1.00: failed command: READ FPDMA QUEUED
Apr 02 18:13:07 pve kernel: ata1.00: cmd 60/08:20:20:9c:bf/00:00:00:00:00/40 tag 4 ncq dma 4096 in
res 41/40:20:20:9c:bf/00:00:00:00:00/a0 Emask 0x409 (media error) <F>
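A "media error" on a READ FPDMA QUEUED command means the drive itself reported an unreadable sector, so checking SMART data is a sensible next step. A minimal sketch, assuming the affected disk (ata1 here) is /dev/sda; adjust the device name to your system:

smartctl -a /dev/sda        # full SMART report: overall health, reallocated and pending sector counts
smartctl -t short /dev/sda  # optional short self-test; re-check with -a once it finishes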
pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
qm config 210
bootdisk: sata0
cores: 2
description:
ide2: nas:iso/CentOS-7-x86_64-Minimal-1708.iso,media=cdrom
memory: 12288
name: vmM1
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,tag=123
numa: 0
ostype: l26
sata0: local:210/vm-210-disk-1.qcow2,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=b695c51d-3860-41b0-85be-241decba1f15
sockets: 1
qm config 212
bootdisk: sata0
cores: 2
description:
ide2: nas:iso/CentOS-7-x86_64-Minimal-1708.iso,media=cdrom
memory: 4096
name: vmM2
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,tag=123
numa: 0
ostype: l26
sata0: local:212/vm-212-disk-1.qcow2,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=5b5a1085-94ba-4935-b78f-747ef402af70
sockets: 1
drwxr-xr-x 2 root root 4096 Oct 27 19:39 210
drwxr-xr-x 2 root root 4096 Oct 30 10:11 212
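For context, these are the per-VM image directories that the qemu-img calls below are run from. A comparable permission check, assuming the default path of the "local" directory storage, would be:

cd /var/lib/vz/images   # default path of the "local" directory storage (an assumption here)
ls -l 210 212           # compare ownership and permissions of the disk images in both directories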
qemu-img info 210/vm-210-disk-1.qcow2
image: 210/vm-210-disk-1.qcow2
file format: qcow2
virtual size: 200G (214748364800 bytes)
disk size: 104G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
qemu-img info 212/vm-212-disk-1.qcow2
image: 212/vm-212-disk-1.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 3.3G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
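Both images already report corrupt: false; if you want a deeper consistency check, qemu-img can also scan the qcow2 metadata. A sketch, using the same relative paths as above (run it while the VM is shut down):

qemu-img check 210/vm-210-disk-1.qcow2   # scans refcounts and cluster allocation for errors and leaks
qemu-img check 212/vm-212-disk-1.qcow2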
Apr 3 21:09:32 pve pvedaemon[47006]: starting vnc proxy UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam:
Apr 3 21:09:32 pve pvedaemon[29618]: <root@pam> starting task UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam:
Apr 3 21:09:38 pve pvestatd[1313]: status update time (6.205 seconds)
Apr 3 21:09:49 pve pvestatd[1313]: status update time (6.213 seconds)
Apr 3 21:09:58 pve pvestatd[1313]: status update time (6.217 seconds)
Apr 3 21:10:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 3 21:10:00 pve systemd[1]: pvesr.service: Succeeded.
Apr 3 21:10:00 pve systemd[1]: Started Proxmox VE replication runner.
Apr 3 21:10:06 pve qm[46914]: VM 210 qmp command failed - VM 210 qmp command 'change' failed - got timeout
Apr 3 21:10:06 pve pvedaemon[46912]: Failed to run vncproxy.
Apr 3 21:10:06 pve pvedaemon[43244]: <root@pam> end task UPID:pve:0000B740:540AFBD6:5E8789D1:vncproxy:210:root@pam: Failed to run vncproxy.
Apr 3 21:10:08 pve pvestatd[1313]: status update time (6.202 seconds)
Apr 3 21:10:28 pve pvestatd[1313]: status update time (6.215 seconds)
Apr 3 21:10:39 pve pvestatd[1313]: status update time (6.275 seconds)
Apr 3 21:10:48 pve pvestatd[1313]: status update time (6.192 seconds)
Apr 3 21:10:58 pve pvestatd[1313]: status update time (6.285 seconds)
Apr 3 21:11:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 3 21:11:01 pve systemd[1]: pvesr.service: Succeeded.
Apr 3 21:11:01 pve systemd[1]: Started Proxmox VE replication runner.
Apr 3 21:11:06 pve qm[47021]: VM 210 qmp command failed - VM 210 qmp command 'change' failed - got timeout
Apr 3 21:11:06 pve pvedaemon[47006]: Failed to run vncproxy.
Apr 3 21:11:06 pve pvedaemon[29618]: <root@pam> end task UPID:pve:0000B79E:540B06A2:5E8789EC:vncproxy:210:root@pam: Failed to run vncproxy.
Apr 3 21:11:08 pve pvestatd[1313]: status update time (6.186 seconds)
Apr 3 21:12:33 pve pvedaemon[47458]: starting vnc proxy UPID:pve:0000B962:540B4D1F:5E878AA1:vncproxy:212:root@pam:
Apr 3 21:12:33 pve pvedaemon[46132]: <root@pam> starting task UPID:pve:0000B962:540B4D1F:5E878AA1:vncproxy:212:root@pam:
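The repeated 'change' timeouts above mean the QEMU monitor (QMP) socket of VM 210 stopped answering, which is why vncproxy gives up. One way to narrow it down from the shell (VMID 210 taken from the log; a sketch, not an official procedure):

qm status 210 --verbose   # the verbose status is gathered via QMP as well; if this hangs too, the whole monitor is stuck
qm monitor 210            # interactive QEMU monitor; try "info status", leave with Ctrl+D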
Thank you for sharing, but my problem is that the SSD actually has bad logic; I'm going to Secure Erase the SSD and then reinstall it.

Hello,
I have the same problem.
I have PVE 6.0-4 and two machines:
- both are CentOS 7 (1708) systems,
- the QEMU guest agent is not installed on either machine,
- I checked qemu-img info - the values are identical,
- file permissions - also identical,
- tested with two browsers, also in incognito mode,
- I had my own SSL certificate, but following your instructions I deleted it and regenerated the default pve one (commands sketched below),
- LVM - old layout.
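For reference, regenerating the default certificates mentioned above usually comes down to this (a minimal sketch, run on the node itself):

pvecm updatecerts --force   # rebuild the node's default self-signed certificates
systemctl restart pveproxy  # restart the web/console proxy so it picks them up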
machine with problem:
machine without problem: