"Too many redirections (599)"

Dunuin

I sometimes see this message when opening the summary of a VM or summary of a storage:
[Screenshot: toomanyredirections.png]
Google only finds four hits for "Too many redirections (599)", and none of them explains why it happens or whether/how it could be fixed.
Opening a storage summary in particular can be quite slow (it needs 5 to 10 seconds to load a ZFS/LVM-Thin storage summary), even though IO delay on the host is always below 1%, CPU utilization is below 15%, and all storages are enterprise SSDs (each SSD averaging below 100 IO/s and below 2 MB/s read+written).

Does anyone know what this error message means?

I access the web UI directly with Chrome, from a client in the same subnet, using the local IP.

Code:
root@Hypervisor:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.1-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksmtuned: 4.20150326
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Code:
root@Hypervisor:~# pveperf
CPU BOGOMIPS:      134406.72
REGEX/SECOND:      2466567
HD SIZE:           20.97 GB (/dev/mapper/vgpmx-lvroot)
BUFFERED READS:    268.54 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND:     1772.25
DNS EXT:           50.15 ms
 
This is purely speculation on my part, but I wonder if it might be some sort of API limit or simply odd behavior. I only say this because I noticed the same message while running a lot of API calls and simultaneously trying to view the VM summary page, and I can't say I've noticed it before. It could have been pure coincidence for all I know. Nothing was majorly impacted, but I was curious and also tried to search for it.
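One way to check whether heavy polling alone can trigger it is to hammer a single status endpoint and tally the returned HTTP codes. This is only a sketch: the hostname, node name, VMID, and API token below are placeholders you would replace with your own values.

```shell
# Placeholder host/node/vmid/token -- substitute your own values.
HOST=pve.example.com
TOKEN='user@pve!mytoken=SECRET'
for i in $(seq 1 50); do
  curl -ks -o /dev/null -w '%{http_code}\n' \
    -H "Authorization: PVEAPIToken=$TOKEN" \
    "https://$HOST:8006/api2/json/nodes/proxmox/qemu/100/status/current"
done | sort | uniq -c
```

A healthy run prints only one bucket of 200s; any 599s show up as their own bucket (curl prints 000 when it cannot connect at all).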

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.35-3-pve)
pve-manager: 7.2-5 (running version: 7.2-5/12f1e639)
pve-kernel-5.15: 7.2-5
pve-kernel-helper: 7.2-5
pve-kernel-5.4: 6.4-15
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-10
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Code:
root@proxmox:~# pveperf
CPU BOGOMIPS:      207801.40
REGEX/SECOND:      1879775
HD SIZE:           1798.18 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     1060.11
DNS EXT:           34.32 ms
DNS INT:           23.97 ms (local)

Here are some logs I found of only a couple 599's and surrounding calls in /var/log/pveproxy:

Code:
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/nodes/proxmox/qemu/116/status/current HTTP/1.1" 200 2789
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/pools HTTP/1.1" 200 11
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 12125
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/nodes/proxmox/qemu/117/config HTTP/1.1" 200 648
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/nodes/proxmox/storage/local/status HTTP/1.1" 200 140
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/nodes/proxmox/storage/local/status HTTP/1.1" 200 140
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/cluster/ha/resources/114 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:11:58 -0500] "GET /api2/json/cluster/ha/resources/102 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:11:59 -0500] "GET /api2/json/cluster/ha/resources/102 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:11:59 -0500] "GET /api2/json/cluster/ha/resources/114 HTTP/1.1" 500 13
ipaddr.78 - root@pam [06/07/2022:01:11:59 -0500] "GET /api2/extjs/nodes/proxmox/qemu/106/pending HTTP/1.1" 200 521
ipaddr.78 - root@pam [06/07/2022:01:11:59 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1008
ipaddr.78 - root@pam [06/07/2022:01:12:00 -0500] "GET /api2/json/nodes/proxmox/qemu/120/pending HTTP/1.1" 200 956
ipaddr.56 - api-user-and-token [06/07/2022:01:12:01 -0500] "GET /api2/json/cluster/ha/resources/114 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:12:01 -0500] "GET /api2/json/cluster/ha/resources/102 HTTP/1.1" 500 13
ipaddr.78 - root@pam [06/07/2022:01:12:01 -0500] "GET /api2/extjs/nodes/proxmox/qemu/105/pending HTTP/1.1" 200 1020
ipaddr.78 - root@pam [06/07/2022:01:12:01 -0500] "GET /api2/json/nodes/proxmox/qemu/120/status/current HTTP/1.1" 200 970
ipaddr.56 - api-user-and-token [06/07/2022:01:12:02 -0500] "GET /api2/json/nodes/proxmox/storage/local/content HTTP/1.1" 200 11373
ipaddr.56 - api-user-and-token [06/07/2022:01:12:02 -0500] "GET /api2/json/nodes/proxmox/qemu/125/status/current HTTP/1.1" 599 -
ipaddr.56 - api-user-and-token [06/07/2022:01:12:02 -0500] "GET /api2/json/nodes/proxmox/storage/local/status HTTP/1.1" 200 140
ipaddr.56 - api-user-and-token [06/07/2022:01:12:02 -0500] "GET /api2/json/cluster/ha/resources/117 HTTP/1.1" 500 13
ipaddr.78 - root@pam [06/07/2022:01:12:02 -0500] "GET /api2/extjs/nodes/proxmox/qemu/105/pending HTTP/1.1" 200 1020
ipaddr.56 - api-user-and-token [06/07/2022:01:12:03 -0500] "GET /api2/json/nodes/proxmox/qemu/125/status/current HTTP/1.1" 200 2781
ipaddr.56 - api-user-and-token [06/07/2022:01:12:03 -0500] "GET /api2/json/pools HTTP/1.1" 200 11
ipaddr.56 - api-user-and-token [06/07/2022:01:12:03 -0500] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 12126
ipaddr.56 - api-user-and-token [06/07/2022:01:12:03 -0500] "GET /api2/json/nodes/proxmox/qemu/104/config HTTP/1.1" 200 668
ipaddr.56 - api-user-and-token [06/07/2022:01:12:03 -0500] "GET /api2/json/cluster/ha/resources/117 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:12:04 -0500] "GET /api2/json/nodes/proxmox/qemu/114/status/current HTTP/1.1" 200 2809
ipaddr.56 - api-user-and-token [06/07/2022:01:12:04 -0500] "GET /api2/json/pools HTTP/1.1" 200 11
ipaddr.56 - api-user-and-token [06/07/2022:01:12:04 -0500] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 12126
ipaddr.56 - api-user-and-token [06/07/2022:01:12:04 -0500] "GET /api2/json/nodes/proxmox/qemu/115/config HTTP/1.1" 200 656
ipaddr.78 - root@pam [06/07/2022:01:12:04 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 2809
ipaddr.56 - api-user-and-token [06/07/2022:01:12:05 -0500] "GET /api2/json/cluster/ha/resources/117 HTTP/1.1" 500 13
ipaddr.78 - root@pam [06/07/2022:01:12:06 -0500] "GET /api2/extjs/nodes/proxmox/qemu/106/pending HTTP/1.1" 200 513
ipaddr.78 - root@pam [06/07/2022:01:12:06 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1000
ipaddr.78 - root@pam [06/07/2022:01:12:07 -0500] "GET /api2/json/nodes/proxmox/qemu/120/status/current HTTP/1.1" 200 929
ipaddr.56 - api-user-and-token [06/07/2022:01:12:07 -0500] "GET /api2/json/nodes/proxmox/storage/local/content HTTP/1.1" 200 11373
ipaddr.56 - api-user-and-token [06/07/2022:01:12:07 -0500] "GET /api2/json/nodes/proxmox/qemu/102/status/current HTTP/1.1" 599 -
ipaddr.78 - root@pam [06/07/2022:01:12:08 -0500] "GET /api2/extjs/nodes/proxmox/qemu/105/pending HTTP/1.1" 200 1020
ipaddr.56 - api-user-and-token [06/07/2022:01:12:08 -0500] "GET /api2/json/nodes/proxmox/storage/local/status HTTP/1.1" 200 140
ipaddr.56 - api-user-and-token [06/07/2022:01:12:08 -0500] "GET /api2/json/cluster/ha/resources/104 HTTP/1.1" 500 13
ipaddr.56 - api-user-and-token [06/07/2022:01:12:08 -0500] "GET /api2/json/nodes/proxmox/qemu/117/status/current HTTP/1.1" 200 2791
ipaddr.56 - api-user-and-token [06/07/2022:01:12:08 -0500] "GET /api2/json/pools HTTP/1.1" 200 11
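To get a quick overview of how often the 599s actually occur, the access log can be summarized by status code. A sketch, assuming the default pveproxy log location (adjust the path for rotated logs like access.log.1):

```shell
LOG=/var/log/pveproxy/access.log
# The status code is the second-to-last whitespace-separated field
awk '{print $(NF-1)}' "$LOG" | sort | uniq -c | sort -rn
# Show the five most recent 599 responses with their request lines
grep ' 599 ' "$LOG" | tail -n 5
```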
 
We see this from time to time if we keep the browser tabs open for several days.
We use an RDP terminal management server with the Edge browser to keep all clusters open in dedicated tabs, and we often just disconnect in the evening and reconnect the next morning. Then this sometimes happens, or we just get a timeout, because the session seems to time out and we need to reload the page and log in again. Really annoying sometimes...
 
I think it was caused here by an offsite PBS storage (Tuxis) in combination with bad/slow DNS resolution. Previously my DNS worked like this: Unbound on OPNsense as forwarding DNS server -> Pihole VM -> DNSCrypt-Proxy on OPNsense

With that, accessing any storage took around 10 seconds to respond unless I deactivated the Tuxis PBS storage. With it deactivated, it only took around 1 second.

Then I changed my DNS setup to "Unbound on OPNsense as forwarding DNS server -> Pihole LXC -> Unbound recursive resolution inside the Pihole LXC", and with that, accessing any storage responds within 1-2 seconds even with the Tuxis PBS enabled.

So right after changing my DNS setup these "too many redirections" messages disappeared.


Another case where I've seen this in the past was when I switched from the zabbix-agent to the zabbix-agent2 package on the PVE host: the web UI became very unresponsive, and I think these "too many redirections" popups appeared too. Changing back to the zabbix-agent package fixed it.

I'm not sure why zabbix-agent2 could cause it, but maybe, combined with the faulty DNS resolution, the "pvesm status" calls in the backend were piling up?
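If you suspect the same chain, timing the name resolution of the PBS host separately from the full storage query helps pinpoint where the delay sits. A sketch; `pbs.example.org` is a placeholder for whatever server is configured in /etc/pve/storage.cfg:

```shell
# Resolve the PBS hostname through the system resolver
time getent hosts pbs.example.org
# Time the full storage status run that the GUI triggers in the backend
time pvesm status
```

If the `getent` call alone eats most of the seconds, the delay is DNS rather than the storage itself.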
 
We see this from time to time if we keep the browser tabs open for several days.
We use an RDP terminal management server with the Edge browser to keep all clusters open in dedicated tabs, and we often just disconnect in the evening and reconnect the next morning. Then this sometimes happens, or we just get a timeout, because the session seems to time out and we need to reload the page and log in again. Really annoying sometimes...
That's interesting! I end up closing my browser when I shut down my desktop nightly, but it's often opened for the rest of the day otherwise. Mine does recover from the 599's, it just seems to take a second.

I think it was caused here by an offsite PBS storage (Tuxis) in combination with bad/slow DNS resolution. Previously my DNS worked like this: Unbound on OPNsense as forwarding DNS server -> Pihole VM -> DNSCrypt-Proxy on OPNsense

With that, accessing any storage took around 10 seconds to respond unless I deactivated the Tuxis PBS storage. With it deactivated, it only took around 1 second.

Then I changed my DNS setup to "Unbound on OPNsense as forwarding DNS server -> Pihole LXC -> Unbound recursive resolution inside the Pihole LXC", and with that, accessing any storage responds within 1-2 seconds even with the Tuxis PBS enabled.

So right after changing my DNS setup these "too many redirections" messages disappeared.


Another case where I've seen this in the past was when I switched from the zabbix-agent to the zabbix-agent2 package on the PVE host: the web UI became very unresponsive, and I think these "too many redirections" popups appeared too. Changing back to the zabbix-agent package fixed it.

I'm not sure why zabbix-agent2 could cause it, but maybe, combined with the faulty DNS resolution, the "pvesm status" calls in the backend were piling up?
I have my connected storage mounted via IP address.

Are you able to run the pveperf command? I'd be interested in seeing the timings for your DNS lookups. I have a slightly similar setup: Client -> Pihole -> BIND. While typing that, I remembered that my Proxmox host only uses 1.1.1.1/1.0.0.1 for DNS, because I didn't want it to rely on DNS it hosts itself (it's only a homelab).

I did recently install/configure zabbix-agent on the Proxmox host though, so that's interesting that you mention that.
Code:
root@proxmox:~# apt list --installed | grep zabbix

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

zabbix-agent/stable,now 1:5.0.8+dfsg-1 amd64 [installed]
 
Are you able to run the pveperf command? I'd be interested in seeing the timings for your DNS lookups. I have a slightly similar setup: Client -> Pihole -> BIND. While typing that, I remembered that my Proxmox host only uses 1.1.1.1/1.0.0.1 for DNS, because I didn't want it to rely on DNS it hosts itself (it's only a homelab).
Code:
root@Hypervisor:~# pveperf
CPU BOGOMIPS:      134406.40
REGEX/SECOND:      2425406
HD SIZE:           20.97 GB (/dev/mapper/vgpmx-lvroot)
BUFFERED READS:    271.08 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND:     1323.58
DNS EXT:           19.73 ms

I'm happy with my DNS setup now. I've got two Pihole LXCs with failover using keepalived, so in case one Pihole goes down, the second LXC takes over (ideally you run two PVE nodes with one LXC on each, but you could also run just one LXC and the second Pihole on a bare-metal Raspberry Pi). The Pihole blacklists/whitelists and blocklists are kept in sync between them using gravity-sync, and each Pihole LXC runs its own local Unbound recursive DNS resolver (better for privacy, as you don't need to rely on a single public DNS server that might filter/block domains or could log your entire browsing history).
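The failover piece of such a setup can be sketched with a minimal keepalived VRRP stanza. The interface name, router ID, priority, and the virtual IP 192.168.1.53 here are assumptions to adapt to your own network; clients only ever point at the virtual IP:

```
# /etc/keepalived/keepalived.conf on the primary Pihole LXC (sketch)
vrrp_instance DNS_VIP {
    state MASTER              # use BACKUP on the peer
    interface eth0            # adjust to the LXC's interface name
    virtual_router_id 53
    priority 150              # lower value (e.g. 100) on the backup node
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24       # the address clients use for DNS
    }
}
```

The peer runs the same stanza with `state BACKUP` and a lower `priority`; when the master LXC dies, the VIP moves over within a couple of advertisement intervals.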
 
