Proxmox VE 7.1 released!

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
4,858
1,320
164
South Tyrol/Italy
shop.proxmox.com
Seems like it still isn't safe to upgrade to 7.1? Anyone that has done the upgrade NOT having disk issues?
We have now addressed both regressions, which occurred on somewhat specific setups and were the source of most reports seen here, so currently no outstanding regression in that area is known. Namely:
  • pve-qemu-kvm in version 6.1.0-3 fixes an issue that showed up when using VirtIO-block with VMs that had not much room to breathe CPU- and memory-wise. We got positive feedback about that in the respective bug entry and some forum threads here.
  • pve-kernel-5.13.19-1-pve in version 5.13.19-3 addressed an issue that happened mostly with VMs using io_uring as the async-IO driver and the SATA bus for disks. Initially it was assumed that Windows had to be in the mix, but that was rather a correlation with the SATA bus, as SATA is more commonly used for Windows guests; Linux has native VirtIO SCSI/Block support, after all.
Both issues involved setups with some specific settings. Once we could reproduce them, @Fabian_E managed to track down the offending changes via some not-so-trivial bisecting of the kernel and QEMU, respectively, and also found the respective fixes, which are now rolled out to all repositories.
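To check whether a node already has both fixes, one can compare the installed package versions against the fixed ones. A minimal sketch (the `newer_or_equal` helper is mine, and `sort -V` is only an approximation of Debian version ordering, good enough for these simple version strings):

```shell
# Check whether the installed packages already include the two fixes.
# Assumes a PVE host with dpkg; falls back gracefully if a package is absent.
newer_or_equal() {
    # true if $1 >= $2 in version order (approximate Debian ordering)
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

qemu_ver=$(dpkg-query -W -f='${Version}' pve-qemu-kvm 2>/dev/null || true)
kernel_ver=$(dpkg-query -W -f='${Version}' pve-kernel-5.13.19-1-pve 2>/dev/null || true)

newer_or_equal "${qemu_ver:-0}" "6.1.0-3" \
    && echo "pve-qemu-kvm has the VirtIO-block fix ($qemu_ver)" \
    || echo "pve-qemu-kvm needs upgrading to >= 6.1.0-3"
newer_or_equal "${kernel_ver:-0}" "5.13.19-3" \
    && echo "kernel package has the io_uring/SATA fix ($kernel_ver)" \
    || echo "kernel package needs upgrading to >= 5.13.19-3"
```

On a system with an exact Debian comparison available, `dpkg --compare-versions` can replace the helper.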
 

t.lamprecht

Proxmox Staff Member
Passthrough issues have nothing to do with the aforementioned disk/storage corruption/errors.
Also, due to the wide range of different hardware, firmware (motherboard and passed-through device), kernel, and QEMU versions, where often only a very specific set of combinations works with passthrough, it's rather a best-effort technology for the general case, not only in Proxmox VE but on any hypervisor.
 

sbellon

New Member
Oct 12, 2021
8
0
1
45
Not sure I understand your last sentence ... are you saying that, when using PCI passthrough, it may very well happen that there are regressions when upgrading?
 

avw

Renowned Member
May 31, 2020
1,025
177
63
Not sure I understand your last sentence ... are you saying that, when using PCI passthrough, it may very well happen that there are regressions when upgrading?
That did happen to me, and I believe to some other people as well, when going from pve-kernel-5.11.22-5 to 5.11.22-7 (and 5.13+). I do have the impression that passthrough is not a common use case for most hardware vendors and therefore not tested or designed for. But I've been more or less happy with it since PVE 3.2.
 

sbellon

New Member
I see.

I started to build my virtualized OPNsense router on Proxmox 7.0 with NIC PCI passthrough. It works beautifully and I'm absolutely happy with the setup so far (also running Unifi, Pi-hole and PBS as containers).

However, a report like https://forum.proxmox.com/threads/h...o-7-1-router-vm-with-pass-through-nic.100091/ (which most likely involves the same or almost the same hardware as mine) makes me horrified and reluctant to upgrade to 7.1, because if I do and it doesn't work anymore ... I'm cut off from the internet.
 

t.lamprecht

Proxmox Staff Member
I'm saying that PCI(e) passthrough can often be a bit delicate, more on some setups and less on others; especially with non-enterprise HW it may stop, but also start, working with major kernel (and other) changes, albeit it can often be fixed again.
Some users here use it quite successfully in their setups, and we also have some test setups/workstations with passthrough here.
However, a report like https://forum.proxmox.com/threads/h...o-7-1-router-vm-with-pass-through-nic.100091/ (which most likely involves the same or almost the same hardware as mine) makes me horrified and reluctant to upgrade to 7.1, because if I do and it doesn't work anymore ... I'm cut off from the internet.
NIC passthrough can be less problematic than GPU passthrough, as it's slightly less complex - but as you said yourself, if it breaks, your network access may be impacted, so far from ideal. Note though that the old kernel stays installed, so you can reboot into it to investigate or at least restore functionality. FWICT, the linked thread's issue was not directly the NIC passthrough itself.

Note, I'd not expect passthrough to break on every occasion; if that happens, you may be truly unlucky with the HW in use. But the possibility of needing to adapt to changes, and of temporary breakage, is something you should keep in mind when doing major upgrades; for more productive environments I'd definitely recommend a test setup.
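As a small aid for that fallback path, a sketch (assuming a Debian-based PVE host, as is standard) to list which kernel packages are still installed, so you know what to pick from GRUB's "Advanced options" menu if the new kernel misbehaves:

```shell
# List installed PVE kernel packages with their versions; the currently
# running kernel is shown by uname for comparison.
dpkg-query -W -f='${Package} ${Version}\n' 'pve-kernel-*' 2>/dev/null | sort -V
uname -r
```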
 

patch

Member
Aug 5, 2021
31
0
6
32
the linked thread's issue was not directly the NIC pass through itself
Agree
  • Passthrough complicated tracking down the bug, but I believe it turned out not to be the cause (as did the current complexity of rolling back the kernel).
  • The root cause is how 7.1 behaves when no DHCP / DNS / external route is available when Proxmox boots.
  • The workaround is to add a second static entry to the external DHCP server, replicating the one set statically in Proxmox.
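As a purely hypothetical illustration of that workaround (assuming the external DHCP server happens to be dnsmasq; the MAC address, hostname, and IP below are placeholders), such a static reservation could look like:

```
# /etc/dnsmasq.d/pve.conf (hypothetical values)
# Reserve the address the Proxmox host already has configured statically,
# so the mapping exists even if the host boots before external services.
dhcp-host=aa:bb:cc:dd:ee:ff,pve-node,192.0.2.10
```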
 

wigor

Member
Dec 5, 2019
50
10
13
What is your VM config? What IO drivers, what OS?
virtio-scsi, windows (10) & linux guests, older and newer kernel.
One very old machine with 2.2 kernel tested, with ide.
But no virtio-blk and no sata.
 

Taledo

New Member
Nov 20, 2020
23
1
3
51
Good Day.

Can we assume the issues regarding IO are fixed in the no-subscription repos?

Best Regards
 

t.lamprecht

Proxmox Staff Member
Can we assume the issues regarding IO are fixed in the no-subscription repos?
Those mentioned in my post above are, and as also written there, we're not aware of similar regressions in the IO stack:
We have now addressed both regressions, which occurred on somewhat specific setups and were the source of most reports seen here, so currently no outstanding regression in that area is known. Namely:
  • pve-qemu-kvm in version 6.1.0-3 fixes an issue that showed up when using VirtIO-block with VMs that had not much room to breathe CPU- and memory-wise. We got positive feedback about that in the respective bug entry and some forum threads here.
  • pve-kernel-5.13.19-1-pve in version 5.13.19-3 addressed an issue that happened mostly with VMs using io_uring as the async-IO driver and the SATA bus for disks. Initially it was assumed that Windows had to be in the mix, but that was rather a correlation with the SATA bus, as SATA is more commonly used for Windows guests; Linux has native VirtIO SCSI/Block support, after all.
Both issues involved setups with some specific settings. Once we could reproduce them, @Fabian_E managed to track down the offending changes via some not-so-trivial bisecting of the kernel and QEMU, respectively, and also found the respective fixes, which are now rolled out to all repositories.
 

Taledo

New Member
Hi all.

The affected node has been updated and restarted on our end. The IO issues are gone, and installing Ubuntu in a VM no longer results in long waiting times.

Cheers
 

uncle.cripple

Member
Jul 30, 2019
11
3
8
43
I upgraded from PVE 6.4 to PVE 7.1 last week (2021-12-02) and have still experienced freezing problems over the last week with my Windows VMs (2012R2/2016), which use SATA disks. The freezing seems most likely to happen when the integrated Windows Server Backup starts, due to higher disk I/O. But I can only reproduce the VM freezing occasionally when starting a Windows Server Backup. All disk images are mounted via NFS from TrueNAS to the PVE nodes. No problems with my Linux VMs so far - all Ubuntu with VirtIO SCSI.

Code:
2021-12-02 12:59:54 status installed pve-qemu-kvm:amd64 6.1.0-3
2021-12-02 12:57:01 status installed pve-kernel-5.13.19-1-pve:amd64 5.13.19-3

I updated to kernel 5.13.19-2-pve on 2021-12-03, which didn't fix the freezing issue for me.
Code:
2021-12-02 13:00:52 status installed initramfs-tools:all 0.140
2021-12-03 12:30:43 status installed pve-kernel-helper:all 7.1-6
2021-12-03 12:31:46 status installed pve-kernel-5.13.19-2-pve:amd64 5.13.19-4
2021-12-03 12:31:46 status installed pve-kernel-5.13:all 7.1-5

The VMs look like this when freezing and are not accessible any more. The Guest Agent and Ballooning device are installed in Windows (latest version from virtio-win-0.1.208.iso) but no longer report anything. Only stopping and starting the VMs makes them accessible via VNC or Remote Desktop again. The daily Windows Server Backups start at 20:00, and you can see the memory spikes, and therefore the freezing, shortly after.

[screenshot: 20211208_gandalf-frozen.png - memory graph of the frozen VM]

I updated to the following packages yesterday afternoon:
Code:
2021-12-09 17:32:49 status installed lxcfs:amd64 4.0.11-pve1
2021-12-09 17:32:50 status installed lxc-pve:amd64 4.0.11-1
2021-12-09 17:32:50 status installed pve-container:all 4.1-3
2021-12-09 17:32:56 status installed pve-manager:amd64 7.1-8
2021-12-09 17:32:56 status installed pve-ha-manager:amd64 3.3-1
2021-12-09 17:32:56 status installed libc-bin:amd64 2.31-13+deb11u2
2021-12-09 17:33:00 status installed man-db:amd64 2.9.4-2

Since then I have not encountered a freezing Windows VM again and could not trigger a freeze by starting a Windows Backup. But that could be pure luck...

Current state of one of the PVE nodes:
Bash:
# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.4: 6.4-10
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.4.151-1-pve: 5.4.151-1
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

I have NOT implemented the following workaround yet.
Hi,

this has been known since relatively recently; many thanks for your report anyhow! It affects mainly Windows VMs (but those are the ones that use SATA with a higher probability, due to it working out of the box) and can be worked around by switching the disk's Async IO mode to threads (for cache = write back/through) or native (for cache = off, none or direct sync), respectively.
[screenshot: VM disk settings dialog showing the Async IO option]


Note that it seems to really be the full combination of kernel 5.13 and SATA and io_uring and maybe Windows; changing any of the former three makes it work again. I cannot say for sure that it has to be Windows too, though.

Note that SATA is really not the best choice for a VM's disk bus in general; rather use (VirtIO-) SCSI for the best performance and feature set. Windows VirtIO support is available through https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
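The quoted workaround can also be applied from the CLI; a sketch, assuming a hypothetical VM ID 100 and volume spec (adjust both to the actual disk line in the VM config):

```
# Hypothetical example: keep the existing volume and options, only
# changing aio to "threads" (paired here with cache=writethrough).
qm set 100 --sata0 nas01-images:100/vm-100-disk-0.qcow2,aio=threads,cache=writethrough,discard=on,ssd=1
```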
@t.lamprecht Do I have to reboot the VM after changing AIO from io_uring to threads? Or does the change take effect without rebooting the VM?

Or should I switch from SATA to IDE disks?
 

t.lamprecht

Proxmox Staff Member
I upgraded from PVE 6.4 to PVE 7.1 last week (2021-12-02) and have still experienced freezing problems over the last week with my Windows VMs (2012R2/2016), which use SATA disks. The freezing seems most likely to happen when the integrated Windows Server Backup starts, due to higher disk I/O. But I can only reproduce the VM freezing occasionally when starting a Windows Server Backup. All disk images are mounted via NFS from TrueNAS to the PVE nodes.
What hardware do you use (e.g., CPU), and what does a full VM config of one of the affected guests look like?

@t.lamprecht Do I have to reboot the VM after changing AIO from io_uring to threads? Or does the change take effect without rebooting the VM?

You can also live-migrate it to another host in the cluster to bring the change in async IO mode into effect, but a full VM restart naturally works too.

Or should I switch from SATA to IDE disks?
If you can, I'd suggest installing the VirtIO drivers and using SCSI for the disk bus with VirtIO-SCSI as the SCSI controller, also for the Windows VMs; that should give you a bit better performance compared to SATA as well: https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
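A sketch of that bus switch on the CLI, assuming a hypothetical VM ID 100 and volume name, with the VM shut down and the VirtIO drivers already installed in the guest:

```
qm set 100 --scsihw virtio-scsi-pci     # use the VirtIO SCSI controller
qm set 100 --delete sata0               # detach the disk (it becomes unused0)
qm set 100 --scsi0 nas01-images:100/vm-100-disk-0.qcow2,discard=on,ssd=1
qm set 100 --boot order=scsi0           # boot from the new bus
```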
 

uncle.cripple

Member
Thanks for the quick reply.
What hardware do you use (e.g., CPU), and what does a full VM config of one of the affected guests look like?
- Dell PowerEdge R440 with 40 x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz (2 Sockets)
- Dell PowerEdge R630 with 48 x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (2 Sockets)
- Fujitsu PRIMERGY RX2530 M1 with 48 x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (2 Sockets)

Windows Server 2012R2
Code:
# cat /etc/pve/qemu-server/192010.conf
agent: 1
bootdisk: sata0
cores: 6
ide2: none,media=cdrom
memory: 24576
name: gandalf-old
net0: virtio=CA:C7:99:2B:3D:DF,bridge=vmbr0,firewall=1
net1: virtio=8E:0F:0B:7A:88:79,bridge=vmbr4,firewall=1
numa: 0
ostype: win8
protection: 1
sata0: nas02-images:192010/vm-192010-disk-0.qcow2,aio=threads,cache=writethrough,discard=on,size=1T,ssd=1
smbios1: uuid=24B54D56-824E-893C-B4A8-54D926CC8DE1,base64=1,manufacturer=SFA=,product=UHJvTGlhbnQgREwxODAgRzY=,serial=Q1pKMDM0MEZSSg==
sockets: 1
startup: order=1
vmgenid: c4166171-94c8-4960-9128-48ae70fc234b

Windows Server 2016
Code:
# cat /etc/pve/qemu-server/100010.conf
agent: 1
bootdisk: sata0
cores: 6
ide2: none,media=cdrom
memory: 16384
name: gandalf
net0: virtio=56:0E:D8:7E:5C:57,bridge=vmbr100,firewall=1
net1: virtio=8E:FE:00:4A:F9:04,bridge=vmbr4,firewall=1
numa: 0
ostype: win10
protection: 1
sata0: nas01-images:100010/vm-100010-disk-0.qcow2,aio=threads,cache=writethrough,discard=on,size=1T,ssd=1
smbios1: uuid=73270af3-a006-41bc-b923-3a5e259e6daf,base64=1,product=UHJveG1veCBWTQ==,serial=MTAwMDEw
sockets: 1
startup: order=2
unused0: nas02-images:100010/vm-100010-disk-0.qcow2
vmgenid: 3a2f98c4-df37-448e-a6c1-bf7a91604ea9

You can also live-migrate it to another host in the cluster to bring the change in async IO mode into effect, but a full VM restart naturally works too.
Thanks. I live migrated the VMs for the moment. You can already see aio=threads in the configs above. I changed that a couple of minutes ago.

If you can, I'd suggest installing the VirtIO drivers and using SCSI for the disk bus with VirtIO-SCSI as the SCSI controller, also for the Windows VMs; that should give you a bit better performance compared to SATA as well: https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
Thanks for the tip. The VirtIO drivers are already installed. I'll try switching to VirtIO-SCSI if aio=threads doesn't help.
 

uncle.cripple

Member
I just glanced over the VM configs and task histories, and I don't know if it's important, but for the Windows 2016 VM you see unused0: nas02-images:100010/vm-100010-disk-0.qcow2. I moved the disk to a different NAS on 2021-12-09 at around 10:00 in the morning - previously it was on the same NAS as the Windows 2012R2 VM. And from that moment on I did not encounter any freezing problems with my Windows VMs anymore (Async IO was set to io_uring then). Maybe that's just a coincidence, or maybe it's a bandwidth-related problem for the virtual disks. On both VMs the Windows Server Backup is configured to start at 20:00.

To corroborate that: as the Windows Server Backups failed with the freezing VMs, I tried the built-in Proxmox backup before I moved the disk of the Windows 2016 VM to a different NAS, and both Proxmox backups failed with the following errors, also freezing the VMs. I never tried/enabled the Proxmox backup again after I moved the disk, though.

Windows Server 2012R2 (VM ID 192010)
Code:
INFO: starting new backup job: vzdump 192010 100010 --quiet 1 --prune-backups 'keep-last=5,keep-weekly=1' --compress zstd --mailto xx@yy --storage nas04-backup --mailnotification failure --mode snapshot
INFO: skip external VMs: 100010
INFO: Starting Backup of VM 192010 (qemu)
INFO: Backup started at 2021-12-09 00:30:06
INFO: status = running
INFO: VM Name: gandalf-old
INFO: include disk 'sata0' 'nas02-images:192010/vm-192010-disk-0.qcow2' 1T
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/nas04-backup/dump/vzdump-qemu-192010-2021_12_09-00_30_06.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 192010 qmp command 'guest-fsfreeze-thaw' failed - got timeout
INFO: started backup task 'ad8d296e-e736-4e85-bd2f-313777d3b09f'
INFO: resuming VM again
INFO:   0% (245.4 MiB of 1.0 TiB) in 3s, read: 81.8 MiB/s, write: 55.5 MiB/s
INFO:   1% (10.2 GiB of 1.0 TiB) in 11m 23s, read: 15.1 MiB/s, write: 14.9 MiB/s
...
INFO:  56% (573.4 GiB of 1.0 TiB) in 3h 26m 18s, read: 35.5 MiB/s, write: 35.2 MiB/s
INFO:  57% (583.7 GiB of 1.0 TiB) in 3h 31m 48s, read: 31.8 MiB/s, write: 31.3 MiB/s
ERROR: VM 192010 qmp command 'query-backup' failed - got timeout
INFO: aborting backup job
ERROR: VM 192010 qmp command 'backup-cancel' failed - unable to connect to VM 192010 qmp socket - timeout after 5978 retries
INFO: resuming VM again
ERROR: Backup of VM 192010 failed - VM 192010 qmp command 'cont' failed - unable to connect to VM 192010 qmp socket - timeout after 450 retries
INFO: Failed at 2021-12-09 04:23:53
INFO: Backup job finished with errors
TASK ERROR: job errors

Windows Server 2016 (VM ID 100010)
Code:
INFO: starting new backup job: vzdump 192010 100010 --mode snapshot --compress zstd --prune-backups 'keep-last=5,keep-weekly=1' --mailnotification failure --mailto xx@yy --quiet 1 --storage nas04-backup
INFO: skip external VMs: 192010
INFO: Starting Backup of VM 100010 (qemu)
INFO: Backup started at 2021-12-09 00:30:02
INFO: status = running
INFO: VM Name: gandalf
INFO: include disk 'sata0' 'nas02-images:100010/vm-100010-disk-0.qcow2' 1T
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/nas04-backup/dump/vzdump-qemu-100010-2021_12_09-00_30_02.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'a1d8362c-8593-4dc7-bfd7-4c0fd3524df1'
INFO: resuming VM again
INFO:   0% (616.2 MiB of 1.0 TiB) in 3s, read: 205.4 MiB/s, write: 123.3 MiB/s
ERROR: VM 100010 qmp command 'query-backup' failed - got timeout
INFO: aborting backup job
ERROR: VM 100010 qmp command 'backup-cancel' failed - unable to connect to VM 100010 qmp socket - timeout after 5983 retries
INFO: resuming VM again
ERROR: Backup of VM 100010 failed - VM 100010 qmp command 'cont' failed - unable to connect to VM 100010 qmp socket - timeout after 450 retries
INFO: Failed at 2021-12-09 00:56:26
INFO: Backup job finished with errors
TASK ERROR: job errors
 

VR_Architect

New Member
Dec 13, 2021
10
0
1
55
Trying to upgrade from 7.0 to 7.1, and it is failing on reading the headers from the Proxmox website. I have the correct no-subscription debs. I noticed that if I use the browser to go to the repository URL, the amd64 folder is called "binary-amd64". Could this be the issue? Should the folder be called "amd64"? Also, note that the browser shows "pve/dists/bullseye" but the upgrade is trying to get "pve bullseye".

I have full access to the internet, I installed 7.0 about 3 weeks ago with no issue and was able to upgrade four of my servers to 7.1 with no issue then. Now I need to upgrade the other 12 servers.

URL in browser:
http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/

Upgrade message:
Code:
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  libjs-qrcodejs libposix-strptime-perl pve-kernel-5.13
  pve-kernel-5.13.19-2-pve
The following packages will be upgraded:
  libnss3 libpve-access-control libpve-cluster-api-perl
  libpve-cluster-perl libpve-common-perl
  libpve-guest-common-perl libpve-http-server-perl
  libpve-rs-perl libpve-storage-perl
  proxmox-backup-client proxmox-backup-file-restore
  proxmox-mini-journalreader proxmox-ve
  proxmox-widget-toolkit pve-cluster pve-container
  pve-docs pve-edk2-firmware pve-i18n pve-kernel-helper
  pve-manager pve-qemu-kvm qemu-server
23 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 136 MB/137 MB of archives.
After this operation, 407 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Err:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libjs-qrcodejs all 1.20201119-pve1
  Connection failed [IP: 144.217.225.162 80]
Err:2 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-cluster-api-perl all 7.1-2
  Connection failed [IP: 144.217.225.162 80]
Err:3 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-cluster-perl all 7.1-2
  Connection failed [IP: 144.217.225.162 80]
Err:4 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-cluster amd64 7.1-2
  Connection failed [IP: 144.217.225.162 80]
Err:5 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-guest-common-perl all 4.0-3
  Connection failed [IP: 144.217.225.162 80]
Err:6 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-storage-perl all 7.0-15
  Connection failed [IP: 144.217.225.162 80]
Err:7 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-http-server-perl all 4.0-4
  Connection failed [IP: 144.217.225.162 80]
Err:8 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 proxmox-backup-client amd64 2.1.2-1
  Connection failed [IP: 144.217.225.162 80]
Err:9 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 proxmox-backup-file-restore amd64 2.1.2-1
  Connection failed [IP: 144.217.225.162 80]
Err:10 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 qemu-server amd64 7.1-4
  Connection failed [IP: 144.217.225.162 80]
Err:11 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-common-perl all 7.0-14
  Connection failed [IP: 144.217.225.162 80]
Err:12 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-edk2-firmware all 3.20210831-2
  Connection failed [IP: 144.217.225.162 80]
Err:13 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-qemu-kvm amd64 6.1.0-3
  Connection failed [IP: 144.217.225.162 80]
Err:14 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-manager amd64 7.1-7
  Connection failed [IP: 144.217.225.162 80]
Err:15 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-access-control all 7.1-5
  Connection failed [IP: 144.217.225.162 80]
Err:16 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-rs-perl amd64 0.4.4
  Connection failed [IP: 144.217.225.162 80]
Err:17 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 proxmox-mini-journalreader amd64 1.3-1
  Connection failed [IP: 144.217.225.162 80]
Err:18 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 proxmox-widget-toolkit all 3.4-4
  Connection failed [IP: 144.217.225.162 80]
Err:19 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-container all 4.1-2
  Connection failed [IP: 144.217.225.162 80]
Err:20 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-docs all 7.1-2
  Connection failed [IP: 144.217.225.162 80]
Err:21 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-i18n all 2.6-2
  Connection failed [IP: 144.217.225.162 80]
Err:22 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-kernel-helper all 7.1-6
  Connection failed [IP: 144.217.225.162 80]
Err:23 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-kernel-5.13.19-2-pve amd64 5.13.19-4
  Connection failed [IP: 144.217.225.162 80]
Err:24 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-kernel-5.13 all 7.1-5
  Connection failed [IP: 144.217.225.162 80]
Err:25 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 proxmox-ve all 7.1-1
  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libjs-qrcodejs_1.20201119-pve1_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-cluster-api-perl_7.1-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-cluster-perl_7.1-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-cluster_7.1-2_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-guest-common-perl_4.0-3_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-storage-perl_7.0-15_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-http-server-perl_4.0-4_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-backup-client_2.1.2-1_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-backup-file-restore_2.1.2-1_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/qemu-server_7.1-4_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-common-perl_7.0-14_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-edk2-firmware_3.20210831-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-qemu-kvm_6.1.0-3_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-manager_7.1-7_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-access-control_7.1-5_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/libpve-rs-perl_0.4.4_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-mini-journalreader_1.3-1_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-widget-toolkit_3.4-4_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-container_4.1-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-docs_7.1-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-i18n_2.6-2_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-helper_7.1-6_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.13.19-2-pve_5.13.19-4_amd64.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/pve-kernel-5.13_7.1-5_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-ve_7.1-1_all.deb  Connection failed [IP: 144.217.225.162 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

System not fully up to date (found 26 new packages)


deb's:
 

Fabian_E

Proxmox Staff Member
Staff member
Aug 1, 2019
1,485
231
68
Hi,
Trying to upgrade from 7.0 to 7.1, and it is failing on reading the headers from the Proxmox website. I have the correct no-subscription debs. I noticed that if I use the browser to go to the repository URL, the amd64 folder is called "binary-amd64". Could this be the issue? Should the folder be called "amd64"? Also, note that the browser shows "pve/dists/bullseye" but the upgrade is trying to get "pve bullseye".
Code:
E: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-ve_7.1-1_all.deb  Connection failed [IP: 144.217.225.162 80]
As you can see in the log, it is trying to fetch the correct file (APT is able to figure it out ;)), and the message indicates a connection error. Can you ping the IP / download the file manually?
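For the manual check, something along these lines on the affected node (IP and file name taken from the error log above; wget could be used instead of curl):

```
ping -c 3 144.217.225.162
curl -fO http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/proxmox-ve_7.1-1_all.deb
```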
 
