[SOLVED] After 8.1.3 upgrade Summary Graphs are blank

Thanks @iay -- I had an iSCSI target on a device that is no longer being used by Proxmox -- it was getting the --login failed message and making pvestatd take too long. Fixing the problem immediately fixed my graphs :)
 
Also thanks to @iay -- we had a problem with the iSCSI login as well. Since we moved everything to Ceph, the iSCSI target was no longer used, and because the error was not visible in the GUI we didn't know about it. Now the graphs are all back.
 
I have been having this problem (empty graphs, nothing but timestamps in the "pvesh get /nodes/xxx/rrddata" output) as well. I fixed it in my case, but the underlying cause may have been different so to help other people coming this way I'll summarise.

In my case, as well as the graph issues I was also seeing a lot of UI connection failures and the syslog output had sections like this:

Code:
Dec 20 21:02:06 pve02 iscsid[923]: connection-1:0 cannot make a connection to 2a02:redacted:3260 (-1,101)
Dec 20 21:02:06 pve02 iscsid[923]: connection-1:0 cannot make a connection to fe80::redacted:3260 (-1,22)
Dec 20 21:02:08 pve02 pvestatd[820]: command '/usr/bin/iscsiadm --mode node --targetname iqn.redacted --login' failed: exit code 15
Dec 20 21:02:08 pve02 pvestatd[820]: status update time (244.601 seconds)

My guess is that pvestatd was just taking so long that the data it was collecting didn't make it into storage.
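For anyone checking whether they're hitting the same thing: the cycle time pvestatd reports can be pulled out of the logs and compared against the normal second or so. A minimal sketch, using a captured sample line in place of live `journalctl -u pvestatd` output (the sed pattern is my own, not anything Proxmox ships):

```shell
# Extract the cycle time from a pvestatd "status update time" log line.
# On a real node, feed this from: journalctl -u pvestatd --since "1 hour ago"
sample='Dec 20 21:02:08 pve02 pvestatd[820]: status update time (244.601 seconds)'
secs=$(printf '%s\n' "$sample" \
  | sed -n 's/.*status update time (\([0-9.]*\) seconds).*/\1/p')
echo "pvestatd cycle took ${secs}s"
# Anything far above a few seconds suggests a storage backend
# (here: the failing iSCSI login) is blocking the collector.
```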

The reason that operation was taking a long time appears to have been that I've recently started running iSCSI to some Synology boxes, and Synology's DSM has some behaviour that causes the iSCSI stack in Proxmox some trouble: it advertises each target on every single IP address it possesses, then refuses to permit duplicate logins on the different addresses that all resolve to the same interface. The client then gets into a long loop trying to perform those redundant logins every few seconds until it times out. Everything works, but UI operations are extremely slow or may appear to fail. Interestingly, VMware ESXi has never had a problem with this weird behaviour but perhaps they have special-cased it.
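You can actually see this behaviour in the discovery output: the same IQN comes back once per advertised address. A self-contained sketch of counting portals per target (the sample lines are placeholders in the two-column format `iscsiadm` prints; real output comes from your own portal):

```shell
# Count how many portals each target is advertised on.
# Sample stands in for: iscsiadm --mode discovery --type sendtargets --portal <ip>
discovery='10.0.0.100:3260,1 iqn.2000-01.com.synology:example.Target-1.x
[fe80::211:32ff:fefe:dc69]:3260,1 iqn.2000-01.com.synology:example.Target-1.x
[2a02:db8::1]:3260,1 iqn.2000-01.com.synology:example.Target-1.x'
printf '%s\n' "$discovery" \
  | awk '{count[$2]++} END {for (t in count) print t, count[t]}'
# More than one portal per target means the initiator will attempt
# (and here, repeatedly fail) logins on every address.
```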

The fix for this was to disable IPv6 on the Synology boxes entirely, then reboot one of the Proxmox nodes. That seems to be enough to cause the iSCSI stack on all nodes to rediscover the targets but now it only knows about a single, IPv4 address, so everything is happy. My graphs have twelve hours of data in them, the UI is back to being instant and nothing else in my network seems to have been relying on IPv6 to those boxes.

One final thing: I'm not actually sure my Proxmox nodes are configured for IPv6 at all. I'm still experimenting here and don't know what to expect, but if anything I'd have expected a lack of IPv6 connectivity to help rather than hinder. Maybe not.

You're a hero :)

Pretty much the same symptoms and log entries, though I wasn't experiencing any UI slowness or timeouts like you mentioned. But I was running iSCSI on a Synology with an auto-configured fe80 IPv6 address.

Simply disabled the IPv6 on the synology interface, rebooted my primary host, and now all my graphs are back for the first time since the 8.1.3 upgrade.

Something must have changed with the 8.1.3 upgrade, as my setup has been this way for a couple years, and the 6.x to 7.x upgrade went without issues.

Thanks again, it's nice to see everything working again =D
 
I encountered the same thing, and after shutting down IPv6 on my Synology and rebooting PVE, things didn't get any better for me. What I found was that iscsid was still trying to probe the IPv6 address associated with my NAS. I shut down the VM using the iSCSI connection, disabled iSCSI in PVE (Datacenter -> Storage), opened a shell in PVE, changed directory into /etc/iscsi/nodes/[iqn.XXXX]/ and removed the IPv6 address directory defined in there. Then I did a "service iscsid restart", re-enabled my iSCSI definition, and restarted the VM using it. Problem solved. My graphs are back again.
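For anyone scripting the same cleanup: the stale portal records are just directories under /etc/iscsi/nodes/<iqn>/. A sketch demonstrated against a throwaway copy of the tree so it's safe to run anywhere (the IQN and addresses are placeholders; on a real node point NODES_DIR at /etc/iscsi/nodes, and only after disabling the storage in Datacenter -> Storage):

```shell
# Demonstrate removing IPv6 portal records from an open-iscsi nodes tree.
# Uses a temporary fake tree; set NODES_DIR=/etc/iscsi/nodes on a real host.
NODES_DIR=$(mktemp -d)
mkdir -p "$NODES_DIR/iqn.2000-01.com.example:target/10.0.0.100,3260,1"
mkdir -p "$NODES_DIR/iqn.2000-01.com.example:target/fe80::211:32ff:fefe:dc69,3260,1"

# Portal directories whose address contains "::" are IPv6 (adjust the
# pattern if your addresses are written out in full without "::").
find "$NODES_DIR" -mindepth 2 -maxdepth 2 -type d -name '*::*' -exec rm -r {} +

ls "$NODES_DIR/iqn.2000-01.com.example:target"
# Only the IPv4 portal remains; on a real node follow up with:
#   service iscsid restart
```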
 
Hi. Same problems here, after the upgrade to PVE 8.1.3.

4 nodes in the cluster. The storage is Synology.

2 problems:
1. On all nodes, logs of the following type: iscsid[1363]: connection-1:0 cannot make a connection to fe80::211:32ff:fefe:dc69:3260 (-1,22)
2. Empty graphs for nodes.

I rebooted all the nodes, so they are all updated to 8.1.3. Both problems remain exactly as before. =(

Where exactly did you disable IPv6 on Synology?

And what are the solutions to these 2 problems?
 
Where exactly did you disable IPv6 on Synology?
Control Panel > Network > Network Interface > LAN 1 > Edit > IPv6 > IPv6 Setup: Off, then OK

I didn't need to restart the Synology. I did restart one of my Proxmox nodes. Your mileage may vary, some people had other experiences above I think.
 
Thanks for the quick response. I have only 2 tabs in the settings of all local interfaces: IPv4 and 802.1X.
There is no IPv6 tab.
 
Thank you! Solved the issue!
 
Same problem here, and I'm also using Synology iSCSI, so the cause is probably exactly the same. However, I make use of the Synology's IPv6 in my LAN, so I cannot simply disable it.

Is there a way to work around this (or even solve it) on the Proxmox side? @Moayad, or anybody else from the Proxmox staff, do you maybe have an idea? It worked fine before 8.1 (or 8.0, I'm not exactly sure), and there was no config change for a long time, so something in Proxmox 8(.1) must have caused this...

 

In version 8.0.x (I think 8.0.6) I did not have this problem; it only appeared when we upgraded to 8.1.3. You could point at Synology, but in six years of running Synology and Proxmox VE together we have only seen this problem on PVE 8.1.3. It's too bad that we don't get answers from moderators (technical support) who have direct access to the developers. =(

We are also a paying customer; this is our sixth year of Community subscriptions for our entire server cluster.
Tech support guys, help us.
 
Hi,

Can the issue be reproduced in a test PVE lab? If yes, could you please provide us the steps?

That would help us locate the issue, since I haven't seen any issue during the upgrade from Proxmox VE 7.x to Proxmox VE 8.x.
 

Unfortunately I don't have spare hardware at my hand to try. But judging from this thread it should be as easy as setting up a Proxmox cluster and adding a Synology iSCSI storage like so:
Code:
iscsi: Synology-iSCSI
        portal 10.0.0.100
        target iqn.2000-01.com.synology:Synology.PXE-Target-1.1111111111
        content images

Also, as alexkenon confirmed, the problem did not exist in 8.0, so it was introduced somewhere between 8.0.x and 8.1.x.

I can also confirm that disabling IPv6 on the Synology works: luckily I run iSCSI over a dedicated 10Gbit fiber NIC on the Synology, and there I don't actually need IPv6. So I disabled it just on that NIC and ran service iscsid restart followed by service rrdcached restart, and the graphs instantly started working again. Still not a real solution, but fine as long as one doesn't need IPv6 on the iSCSI NIC ( :
 
As Asano said, it's enough to set up a PVE cluster and add Synology storage (iSCSI); after that, the problem will appear. Maybe tech support already has something about this problem in the knowledge base? =(
 
I have the same problem on 8.1.3: the graphs don't display, and pvesh get nodes/pve4/rrddata --timeframe hour returns only the time column.
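A quick way to confirm that symptom without eyeballing the whole dump is to count samples that carry any metric at all. A self-contained sketch (the sample JSON is made up; on a real node pipe in `pvesh get /nodes/<node>/rrddata --timeframe hour --output-format json`):

```shell
# Count rrddata samples that contain a cpu metric, not just a timestamp.
# Sample stands in for real `pvesh ... --output-format json` output.
samples='[{"time":1703100000},{"time":1703100060,"cpu":0.02,"memused":1073741824}]'
with_metrics=$(printf '%s' "$samples" | grep -o '"cpu"' | wc -l)
echo "samples with metrics: $with_metrics"
# On an affected node this count is 0 even though timestamps keep arriving.
```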
 
I too can confirm this solves it: I verified the workaround by temporarily disabling IPv6 on the Synology and restarting `iscsid` and `rrdcached`. Unfortunately I do use IPv6 (with plans of dropping IPv4 entirely), so this is not a workable solution in my case, but if you don't use/need IPv6 on your Synology it is indeed viable.
 
Thanks to everyone in this thread. My own issue is resolved, but maybe my experience will help someone else.

My situation was mildly different. Rather than any IPv6 being involved, the unplugged ether2 on my Synology had self-assigned a link-local 169.254.x.x address, and an option in the Synology NAS manager -> iSCSI had "All network interfaces" selected. It has always been this way, with everything humming along.

Yesterday I had to power everything down to replace some batteries, and afterwards graphing was no longer working. Following along in this thread, I found the nodes were picking up the 169.254.x.x address from the Synology and recording it under the /etc/iscsi/nodes/iqn... folder. See picture below: the left shows the directories, and on the right I changed the setting to "Only selected interfaces."

[Image: proxmox-synology-iscsi-thing.png]

Once I selected only the active interface on the synology, I rebooted one of my nodes and shortly the graphing started working for all nodes.
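If you want to check for the same thing without the GUI, the recorded portals show up in `iscsiadm --mode node` output, and IPv4 link-local addresses always fall in 169.254.0.0/16. A self-contained sketch using sample output (the addresses and IQN are placeholders in the format iscsiadm prints):

```shell
# Flag link-local (169.254.0.0/16) portals among recorded iSCSI nodes.
# Sample stands in for: iscsiadm --mode node
nodes='169.254.23.5:3260,1 iqn.2000-01.com.synology:example.Target-1.x
10.0.0.100:3260,1 iqn.2000-01.com.synology:example.Target-1.x'
printf '%s\n' "$nodes" | grep '^169\.254\.'
# Any hit means the NAS advertised an interface with no usable address,
# and the initiator will waste time trying to log in to it.
```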
 
Hi all, just working through this myself; I have a ticket open with Proxmox support, coincidentally with @Moayad, who has already replied in this thread.

I'm happy to raise this with Synology Support as well; Synology appears to be a common factor in this issue, and ultimately there could be a bug on their side too.

One thing I've not seen mentioned: I assume (rightly or wrongly) people are using multipath and have installed multipath-tools on each of their nodes? I have three nodes in my cluster, which is in production, so a fix for this would be in my best interest personally.

Hopefully I'll get the workaround sorted too. I've just done my in-place upgrades from 7.4 to 8.1, and they mostly went OK, apart from a NIC changing its name just to make things a bit more interesting.
 
