lost page write due to I/O error on vdd

pgro

Hi Proxmoxers,

One of my Proxmox boxes is showing strange behaviour. It is a normal installation with ZFS.

On my Debian guest VM I got an "I/O error" after one of our IT members tried to back up the VM.

Code:
Nov 20 07:36:07 srv kernel: [1691424.098978] lost page write due to I/O error on vdd
Nov 20 07:36:12 srv postgres[24289]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:36:12 srv postgres[3952]: [1-1] LOG:  could not open temporary statistics file "global/pgstat.tmp": Read-only file system
Nov 20 07:36:22 srv postgres[24290]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:36:22 srv postgres[3952]: [2-1] LOG:  could not open temporary statistics file "global/pgstat.tmp": Read-only file system
Nov 20 07:36:32 srv postgres[24291]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:36:33 srv postgres[3952]: [3-1] LOG:  could not open temporary statistics file "global/pgstat.tmp": Read-only file system
Nov 20 07:36:42 srv postgres[24292]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:36:42 srv postgres[3952]: [4-1] LOG:  could not open temporary statistics file "global/pgstat.tmp": Read-only file system
Nov 20 07:36:52 srv postgres[24293]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:36:53 srv postgres[3952]: [5-1] LOG:  could not open temporary statistics file "global/pgstat.tmp": Read-only file system
Nov 20 07:37:02 srv postgres[24307]: [2-1] ERROR:  could not open relation 1664/0/1262: Read-only file system
Nov 20 07:37:02 srv postgres[24308]: [2-1] ERROR:  could not open relation 1663/366217805/2601: Read-only file system

Now, for further detail: the VM was working just fine, without any previous errors in its syslog, kern.log, etc.
A user triggered a backup from the GUI, choosing the typical default settings.

[screenshot: backup dialog in the GUI]

The backup started and then, after a couple of minutes, it was cancelled by the user.

Code:
INFO: starting new backup job: vzdump 101 --notes-template '{{guestname}}' --storage local --mode snapshot --node pve-XXXXXX --compress zstd --remove 0
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2023-11-20 07:08:04
INFO: status = running
INFO: VM Name: XXX
INFO: include disk 'virtio0' 'local-zfs:vm-101-disk-0' 32G
INFO: include disk 'virtio1' 'local-zfs:vm-101-disk-1' 17G
INFO: include disk 'virtio2' 'local-zfs:vm-101-disk-2' 300G
INFO: include disk 'virtio3' 'local-zfs:vm-101-disk-3' 501G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-qemu-101-2023_11_20-07_08_04.XXX.zst'
INFO: started backup task '50c6daa6-7782-4f10-a802-a93142d90e22'
INFO: resuming VM again
INFO:   0% (392.1 MiB of 850.0 GiB) in 3s, read: 130.7 MiB/s, write: 125.4 MiB/s
INFO:   1% (8.5 GiB of 850.0 GiB) in 1m 24s, read: 102.8 MiB/s, write: 100.9 MiB/s
INFO:   2% (17.0 GiB of 850.0 GiB) in 2m 39s, read: 116.0 MiB/s, write: 114.0 MiB/s
INFO:   3% (25.5 GiB of 850.0 GiB) in 4m 12s, read: 93.7 MiB/s, write: 92.2 MiB/s
INFO:   4% (34.0 GiB of 850.0 GiB) in 5m 32s, read: 109.0 MiB/s, write: 107.3 MiB/s
INFO:   5% (42.6 GiB of 850.0 GiB) in 6m 57s, read: 103.0 MiB/s, write: 101.4 MiB/s
INFO:   6% (51.0 GiB of 850.0 GiB) in 8m 15s, read: 110.7 MiB/s, write: 108.9 MiB/s
INFO:   7% (59.6 GiB of 850.0 GiB) in 9m 31s, read: 115.3 MiB/s, write: 113.5 MiB/s
INFO:   8% (68.1 GiB of 850.0 GiB) in 10m 49s, read: 111.4 MiB/s, write: 109.6 MiB/s
INFO:   9% (76.6 GiB of 850.0 GiB) in 12m 20s, read: 96.0 MiB/s, write: 94.5 MiB/s
INFO:  10% (85.1 GiB of 850.0 GiB) in 13m 45s, read: 102.3 MiB/s, write: 100.8 MiB/s
INFO:  11% (93.5 GiB of 850.0 GiB) in 15m 23s, read: 88.3 MiB/s, write: 86.9 MiB/s
INFO:  12% (102.0 GiB of 850.0 GiB) in 17m 3s, read: 86.8 MiB/s, write: 85.4 MiB/s
INFO:  13% (110.6 GiB of 850.0 GiB) in 18m 42s, read: 88.3 MiB/s, write: 86.9 MiB/s
INFO:  14% (119.0 GiB of 850.0 GiB) in 20m 10s, read: 98.8 MiB/s, write: 97.2 MiB/s
INFO:  15% (127.5 GiB of 850.0 GiB) in 21m 52s, read: 85.0 MiB/s, write: 83.6 MiB/s
INFO:  16% (136.1 GiB of 850.0 GiB) in 24m 17s, read: 60.4 MiB/s, write: 59.5 MiB/s
INFO:  17% (144.5 GiB of 850.0 GiB) in 26m 26s, read: 67.2 MiB/s, write: 66.2 MiB/s
INFO:  18% (153.1 GiB of 850.0 GiB) in 28m 15s, read: 80.2 MiB/s, write: 79.0 MiB/s
ERROR: interrupted by signal
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 101 failed - interrupted by signal
INFO: Failed at 2023-11-20 07:36:36
ERROR: Backup job failed - interrupted by signal
TASK ERROR: interrupted by signal

At the moment of cancellation, the guest VM hit the I/O issue already posted above.

Any idea what went wrong?
 
Looks like QEMU could not complete a write issued from inside the VM while the backup was going on (and the guest OS handled this by remounting the filesystem read-only). Can you check journalctl on the Proxmox host for possible clues around that time? Maybe the host was out of memory, or it's a bug? What Proxmox version, VM configuration, VM storage, etc.?
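For the journal check, something along these lines (the times are taken from your backup log; adjust as needed):

Code:
# on the Proxmox host, around the time of the abort
journalctl --since "2023-11-20 07:30" --until "2023-11-20 07:45"
# and the relevant services, in case something shows up there
journalctl -u pvedaemon -u pveproxy --since "2023-11-20 07:30" --until "2023-11-20 07:45"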
 
I wasn't able to trace anything in journalctl, but I noticed the message below, which had been recurring since long before the issue.

pvedaemon[3989330]: writing cluster log failed: ipcc_send_rec[7] failed: Invalid argument

But somehow it stopped after 19:00. I don't know if this is relevant or not. Memory seems OK:
Code:
               total        used        free      shared  buff/cache   available
Mem:            31Gi        23Gi       6.9Gi        52Mi       843Mi       7.1Gi
Swap:           41Gi        11Mi        41Gi

This VM is only using 4 GB of RAM, so I expect ZFS to occupy the rest.
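(For reference, the host's ARC usage can be read from the standard OpenZFS counters; the cap in the comment is only an example, not what I have set:)

Code:
# current and maximum ARC size in bytes
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
# the ARC could be capped via /etc/modprobe.d/zfs.conf if needed, e.g.:
#   options zfs zfs_arc_max=8589934592   # 8 GiB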

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.104-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
pve-zsync: 2.2.3
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1


Code:
agent: 0
autostart: 0
boot: order=virtio0
cores: 4
hotplug: disk,network,usb,cpu
kvm: 1
localtime: 1
machine: q35
memory: 4096
meta: creation-qemu=6.2.0,ctime=1665554950
name: VMA
net0: virtio=92:D3:A6:80:20:A3,bridge=vmbr0
net1: virtio=68:05:CA:30:AD:A8,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
parent: auto_hourly_211123090003
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=52aa6351-ff45-4658-9615-15d55a696392
sockets: 1
tablet: 0
virtio0: local-zfs:vm-101-disk-0,discard=on,iothread=1,size=32G
virtio1: local-zfs:vm-101-disk-1,discard=on,iothread=1,size=17G
virtio2: local-zfs:vm-101-disk-2,discard=on,iothread=1,size=300G
virtio3: local-zfs:vm-101-disk-3,discard=on,iothread=1,size=501G
vmgenid: e9004db9-78de-40d7-b632-7929cb044ea3

Is there any way to trace what caused this?

Question: could this happen because my PERC H330 disks are configured with cache enabled?
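(For what it's worth, this is roughly how I check the on-disk write cache from the host; /dev/sda is just a placeholder for one of the pool members:)

Code:
# query the drive's volatile write cache setting (smartmontools)
smartctl -g wcache /dev/sda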
 
Is the controller in HBA mode? Then the controller cache itself should no longer have any effect. But if you configured each disk as a single-disk RAID 0, that could be the cause of the error.

Have you checked whether, and which, file system is still read-only? Maybe there is another cause and the aborted backup is just a symptom?
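A quick way to check from inside the guest would be something like this (assuming a normal Linux guest):

Code:
# list filesystems that are currently mounted read-only
grep ' ro,' /proc/mounts
# or, where util-linux is recent enough:
findmnt -O ro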
 
My controller is in HBA mode, but the disk cache (not the controller cache) is enabled. I searched all the likely log locations but wasn't able to trace anything at all. Maybe QEMU-level troubleshooting could provide further details? The system (guest VM) is now running just fine after a reboot, but I still can't explain why this happened.
 
My controller is in HBA mode, but the disk cache (not the controller cache) is enabled.
That's okay and should stay that way, otherwise you'll lose a lot of performance.
The system (guest VM) is now running just fine after a reboot, but I still can't explain why this happened.
So you didn't look at which FS had gone read-only before rebooting? Of course, that doesn't make debugging any easier for us here if you destroy the clues ;-)
 
I did look.
My /dev/vdd drive, formatted as ext3, went read-only. Actually, I have hourly snapshots, so I can revert to a point in time with the error and the read-only state. So what kind of info would you like me to gather?
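(From the reverted snapshot I can pull, for example, the following; the device name is the one from the log above:)

Code:
# guest-side kernel messages around the event
dmesg | grep -iE 'vdd|i/o error|read-only'
# superblock state and error fields of the affected ext3 filesystem
tune2fs -l /dev/vdd | grep -iE 'state|error|last'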
 
It would be interesting to know whether there are messages from the node and from the VM that correlate, so that possible connections can be reconstructed.

In particular, there is the question of whether the read-only state in the VM existed beforehand, occurred during the backup, or only arose from the abort. The other question is: can you recreate it? Back up the VM completely and see if it works, then try again and cancel it.
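For example from the CLI, with the same options as in your log (then cancel it via the task's Stop button or Ctrl+C):

Code:
vzdump 101 --storage local --mode snapshot --compress zstd --remove 0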

But I also just noticed that you don't have the guest agent enabled in the VM config. I would recommend setting up the guest agent, especially for backups: it tells the VM to flush its caches and lets the guest know that the filesystem will briefly be frozen (blocked).
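Setting it up is roughly two steps (VMID 101 taken from your log; the package install assumes a reasonably current guest):

Code:
# inside the guest
apt-get install qemu-guest-agent
# on the Proxmox host, then power the VM off/on once
qm set 101 --agent enabled=1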
 
It would be interesting to know whether there are messages from the node and from the VM that correlate, so that possible connections can be reconstructed.

No, there are no error indicators in journalctl or under /var/log, nothing. I also performed a scrub on the ZFS pool and still found nothing.
In particular, there is the question of whether the read-only state in the VM existed beforehand, occurred during the backup, or only arose from the abort.
That was the first backup of this VM in snapshot mode from the GUI; nothing like this had shown up before. No, this particular VM has never entered a read-only state again EXCEPT for this one time during the backup, and judging by the timing it happened while the backup was being cancelled. That makes me wonder what actually happens during cancellation. What sequence does Proxmox follow after processing the cancel action?

The other question is: can you recreate it? Back up the VM completely and see if it works, then try again and cancel it.
I did, but no luck :( I expected to catch this read-only mode again, but it didn't reappear.
But I also just noticed that you don't have the guest agent enabled in the VM config. I would recommend setting up the guest agent, especially for backups: it tells the VM to flush its caches and lets the guest know that the filesystem will briefly be frozen (blocked).
OK, that would also be the perfect solution, but I wasn't able to install the agent due to the old kernel and Debian version. (No, it can't be upgraded at the moment.)

Below is the info (don't freak out)

Code:
srv:~# uname -a
Linux srv 3.6.6 #3 SMP Mon Mar 27 10:29:49 GMT 2017 i686 GNU/Linux
srv:~# cat /etc/debian_version
5.0.7

Maybe someone can offer a solution here?

Thank you
 
I expected to catch this read-only mode again, but it didn't reappear.
Then maybe it was just a stupid coincidence? Now that you know this, you can perhaps also make sure that manual backups are not allowed to be aborted.

Oh dear, that's old. :eek:
Maybe someone can offer a solution here?
Does apt-get install qemu-guest-agent not work?

Otherwise you might be able to install one of the packages from here (http://archive.debian.org/debian/pool/main/q/qemu/):
Code:
qemu-guest-agent_2.0.0+dfsg-4~bpo70+1_s390x.deb    2014-05-05 22:15     155K
qemu-guest-agent_2.1+dfsg-5~bpo70+1_s390.deb    2014-10-02 15:42     147K
qemu-guest-agent_2.1+dfsg-5~bpo70+1_sparc.deb    2014-10-02 19:30     133K
qemu-guest-agent_2.1+dfsg-11_kfreebsd-amd64.deb    2014-12-09 22:44     124K
qemu-guest-agent_2.1+dfsg-11_kfreebsd-i386.deb    2014-12-10 01:55     131K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_amd64.deb    2016-02-14 07:55     162K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_armel.deb    2016-02-22 18:16     140K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_armhf.deb    2016-02-22 22:47     136K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_i386.deb    2016-02-22 17:14     171K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_ia64.deb    2016-02-22 19:46     180K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_kfreebsd-amd64.deb    2016-02-22 17:14     151K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_kfreebsd-i386.deb    2016-02-22 17:46     154K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_mips.deb    2016-02-28 14:49     133K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_mipsel.deb    2016-02-22 20:17     133K
qemu-guest-agent_2.1+dfsg-12+deb8u5a~bpo70+1_powerpc.deb    2016-02-22 17:14     142K
qemu-guest-agent_2.1+dfsg-12+deb8u6_amd64.deb    2016-05-09 20:37     133K
qemu-guest-agent_2.1+dfsg-12+deb8u6_arm64.deb    2016-05-09 20:32     117K
qemu-guest-agent_2.1+dfsg-12+deb8u6_armel.deb    2016-05-09 20:32     114K
qemu-guest-agent_2.1+dfsg-12+deb8u6_armhf.deb    2016-05-09 20:37     115K
qemu-guest-agent_2.1+dfsg-12+deb8u6_i386.deb    2016-05-09 20:32     145K
qemu-guest-agent_2.1+dfsg-12+deb8u6_mips.deb    2016-05-09 20:37     119K
qemu-guest-agent_2.1+dfsg-12+deb8u6_mipsel.deb    2016-05-09 20:37     121K
qemu-guest-agent_2.1+dfsg-12+deb8u6_powerpc.deb    2016-05-09 20:32     116K
qemu-guest-agent_2.1+dfsg-12+deb8u6_ppc64el.deb    2016-05-09 20:37     119K
qemu-guest-agent_2.1+dfsg-12+deb8u6_s390x.deb    2016-05-09 20:37     129K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_amd64.deb    2017-03-06 15:40     307K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_arm64.deb    2017-03-06 15:55     267K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_armel.deb    2017-03-06 17:40     264K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_armhf.deb    2017-03-06 17:25     266K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_i386.deb    2017-03-06 15:40     315K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_mips.deb    2017-03-06 20:56     239K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_mipsel.deb    2017-03-06 20:11     242K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_powerpc.deb    2017-03-06 15:45     265K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_ppc64el.deb    2017-03-06 15:55     270K
qemu-guest-agent_2.8+dfsg-3~bpo8+1_s390x.deb    2017-03-06 15:55     285K
qemu-guest-agent_2.8+dfsg-6+deb9u9_amd64.deb    2020-01-31 21:00     308K
qemu-guest-agent_2.8+dfsg-6+deb9u9_arm64.deb    2020-01-31 21:45     283K
qemu-guest-agent_2.8+dfsg-6+deb9u9_armel.deb    2020-01-31 23:15     280K
qemu-guest-agent_2.8+dfsg-6+deb9u9_armhf.deb    2020-01-31 23:00     282K
qemu-guest-agent_2.8+dfsg-6+deb9u9_i386.deb    2020-01-31 20:55     315K
qemu-guest-agent_2.8+dfsg-6+deb9u9_mips.deb    2020-01-31 23:30     260K
qemu-guest-agent_2.8+dfsg-6+deb9u9_mips64el.deb    2020-01-31 21:10     259K
qemu-guest-agent_2.8+dfsg-6+deb9u9_mipsel.deb    2020-01-31 23:31     261K
qemu-guest-agent_2.8+dfsg-6+deb9u9_ppc64el.deb    2020-01-31 20:45     290K
qemu-guest-agent_2.8+dfsg-6+deb9u9_s390x.deb    2020-01-31 20:50     304K
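For your i686 guest that would be something along these lines; no guarantee the dependencies on such an old system can be satisfied:

Code:
# inside the guest; the i386 build matches the i686 userland
wget http://archive.debian.org/debian/pool/main/q/qemu/qemu-guest-agent_2.1+dfsg-12+deb8u6_i386.deb
dpkg -i qemu-guest-agent_2.1+dfsg-12+deb8u6_i386.deb
# if dpkg complains about missing dependencies
apt-get -f install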
 
Then maybe it was just a stupid coincidence? Now that you know this, you can perhaps also make sure that manual backups are not allowed to be aborted.
Yep. Still, I'd like to know how to troubleshoot this more deeply, so I can analyse and understand what caused that I/O error and trace the root cause next time.

Oh dear, that's old. :eek:

Does apt-get install qemu-guest-agent not work?
Definitely not. I also tried to install the legacy packages, but I ran into lots of dependency problems. :( Maybe compiling from source could help?
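Roughly what I had in mind, completely untested, and I doubt the 2009-era toolchain on Debian 5 can actually build it (the release version here is just a guess at something old enough):

Code:
# build only the guest agent from an old QEMU source release
wget https://download.qemu.org/qemu-2.1.3.tar.bz2
tar xjf qemu-2.1.3.tar.bz2 && cd qemu-2.1.3
./configure --disable-system --enable-guest-agent
make qemu-ga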

 
