I have been having this problem (empty graphs, nothing but timestamps in the "pvesh get /nodes/xxx/rrddata" output) as well. I fixed it in my case, but the underlying cause may have been different so to help other people coming this way I'll summarise.
In my case, as well as the graph issues I was also seeing a lot of UI connection failures and the syslog output had sections like this:
Code:
Dec 20 21:02:06 pve02 iscsid[923]: connection-1:0 cannot make a connection to 2a02:redacted:3260 (-1,101)
Dec 20 21:02:06 pve02 iscsid[923]: connection-1:0 cannot make a connection to fe80::redacted:3260 (-1,22)
Dec 20 21:02:08 pve02 pvestatd[820]: command '/usr/bin/iscsiadm --mode node --targetname iqn.redacted --login' failed: exit code 15
Dec 20 21:02:08 pve02 pvestatd[820]: status update time (244.601 seconds)
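If you want to check how far behind pvestatd is on your own node, the cycle time can be pulled out of the log like this (a minimal sketch; the sample lines are inlined so the snippet is self-contained — on a real node you would pipe in `journalctl -u pvestatd` or grep the syslog instead, assuming the same message format as above):

```shell
# Extract pvestatd's reported cycle times from a syslog excerpt.
# Values much above a few seconds mean metric updates are being delayed.
log='Dec 20 21:02:08 pve02 pvestatd[820]: status update time (244.601 seconds)
Dec 20 21:07:12 pve02 pvestatd[820]: status update time (301.120 seconds)'

printf '%s\n' "$log" | sed -n 's/.*status update time (\([0-9.]*\) seconds).*/\1/p'
```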
My guess is that pvestatd was just taking so long that the data it was collecting didn't make it into storage.
The reason that operation was taking so long appears to be that I recently started running iSCSI to some Synology boxes, and Synology's DSM has behaviour that gives the iSCSI stack in Proxmox trouble: it advertises each target on every single IP address it possesses, then refuses to permit duplicate logins on the different addresses that all resolve to the same interface. The client then gets into a long loop, retrying those redundant logins every few seconds until it times out. Everything works, but UI operations are extremely slow or may appear to fail. Interestingly, VMware ESXi has never had a problem with this weird behaviour, but perhaps they have special-cased it.
The fix in my case was to disable IPv6 on the Synology boxes entirely, then reboot one of the Proxmox nodes. That seems to be enough for the iSCSI stack on all nodes to rediscover the targets, but now it only knows about a single IPv4 address, so everything is happy. My graphs now have twelve hours of data in them, the UI is back to being instant, and nothing else in my network seems to have been relying on IPv6 to those boxes.
One final thing: I'm not actually sure my Proxmox nodes are configured for IPv6 at all. I'm still experimenting here and don't know what to expect, but if anything I would have expected a lack of IPv6 connectivity to help, not hinder. Maybe not.
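For anyone else unsure whether their nodes even have IPv6 active, here is a quick per-interface check (a sketch using the standard Linux sysctl tree under /proc; nothing Proxmox-specific is assumed):

```shell
# Print the per-interface IPv6 disable flag (0 = IPv6 enabled, 1 = disabled).
# If the kernel has no IPv6 support at all, the loop simply prints nothing.
for f in /proc/sys/net/ipv6/conf/*/disable_ipv6; do
    [ -e "$f" ] || continue
    iface=$(basename "$(dirname "$f")")
    printf '%s: disable_ipv6=%s\n' "$iface" "$(cat "$f")"
done
```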
Where exactly did you disable IPv6 on Synology?

Control Panel > Network > Network Interface > LAN 1 > Edit > IPv6 > IPv6 Setup: Off, then OK
Thanks for the quick response. I have only 2 tabs in the settings of all local interfaces: IPv4 and 802.1X.
I didn't need to restart the Synology, but I did restart one of my Proxmox nodes. Your mileage may vary; some people above had other experiences, I think.
Thank you! Solved the issue!
In version 8.0.x (I think 8.0.6) I did not have this problem; it only appeared when we upgraded to 8.1.3. Of course, you can blame Synology, but in the 6 years we have run Synology and Proxmox VE together, we have had this problem only on PVE 8.1.3. It's too bad that we don't get answers from moderators (technical support) who have direct access to the developers. =(

Same problem here, and I'm also using Synology iSCSI, so the cause is probably exactly the same. However, I make use of the Synology's IPv6 in my LAN, so I cannot simply disable it.
Is there a way to work around this (or even solve it) on the Proxmox side? @Moayad, or anybody else from the Proxmox staff, do you maybe have an idea? It worked fine before 8.1 (or 8.0, I'm not exactly sure), and there was no config change for a long time, so something in Proxmox 8(.1) must have caused this...
Hi,
Can the issue be reproduced in a test PVE lab? If so, could you please provide us with the steps?

That would help us find where the issue is, since I haven't seen any issues during the upgrade from Proxmox VE 7.x to Proxmox VE 8.x.
As Asano said, it's enough to set up a PVE cluster and add Synology storage (iSCSI); after that, the problem appears. Maybe tech support already has something about this problem in their knowledge base? =(
Unfortunately I don't have spare hardware on hand to try. But judging from this thread, it should be as easy as setting up a Proxmox cluster and adding a Synology iSCSI storage like so:
Code:
iscsi: Synology-iSCSI
        portal 10.0.0.100
        target iqn.2000-01.com.synology:Synology.PXE-Target-1.1111111111
        content images
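For reference, the same storage definition can also be created from the CLI with pvesm (a sketch using the example values from the snippet above, which are not real; adjust storage ID, portal and target to your setup):

```shell
# Add the Synology iSCSI storage to the cluster via the PVE storage CLI.
# This writes the equivalent entry into /etc/pve/storage.cfg.
pvesm add iscsi Synology-iSCSI \
    --portal 10.0.0.100 \
    --target iqn.2000-01.com.synology:Synology.PXE-Target-1.1111111111 \
    --content images
```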
Also, as alexkenon confirmed, the problem did not exist in 8.0, so it was introduced somewhere between 8.0.x and 8.1.x.
I can also confirm that disabling IPv6 on the Synology works. Luckily I run iSCSI over a dedicated 10Gbit fiber NIC on the Synology, where I don't actually need IPv6, so I disabled it just on that NIC and ran

Code:
service iscsid restart

followed by

Code:
service rrdcached restart

and the graphs instantly started working again. Still not a real solution, but fine as long as one doesn't need IPv6 on the iSCSI NIC ( :
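If IPv6 can't be turned off on the Synology side at all, one possible client-side alternative (an untested sketch against open-iscsi's persistent node database; the target IQN is the example value from this thread and the portal is redacted as in the logs above) would be to delete the unwanted IPv6 portal records so iscsid stops retrying them:

```shell
# List all node records open-iscsi knows about; the DSM-advertised
# duplicates show up as one line per portal for the same target IQN.
iscsiadm -m node

# Delete the record for an unwanted IPv6 portal. Substitute your own
# target IQN and copy the portal exactly as shown by the listing.
iscsiadm -m node \
    -T iqn.2000-01.com.synology:Synology.PXE-Target-1.1111111111 \
    -p '[fe80::redacted]:3260' \
    -o delete
```

Note that a later discovery run against the Synology would re-create these records, so this would need repeating (or scripting) after each rediscovery.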