Proxmox VE 7.0 released!

Not sure if this is a Proxmox 7 thing or not, but I thought I would post it here. What does it mean when the Wearout indicator is a negative number? I haven't seen that before. Running PVE version 7.0-10.

(Screenshot attached showing the negative Wearout value in the disk overview.)
 
Can you provide the output of smartctl for that drive? (i.e. the attributes)
 
Thank you @dcsapak. Here is the result of my smartctl investigation:

Code:
# smartctl -t short /dev/sdc
# smartctl -a /dev/sdc
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.11.22-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Apple SD/SM/TS...E/F/G SSDs
Device Model:     APPLE SSD SM1024G
Serial Number:    redacted
LU WWN Device Id: 5 002538 900000000
Firmware Version: BXW1SA0Q
User Capacity:    1,000,555,581,440 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
TRIM Command:     Available
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Jul 19 20:30:34 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (    0) seconds.
Offline data collection
capabilities:              (0x53) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    No Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   2) minutes.
Extended self-test routine
recommended polling time:      (  10) minutes.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x001a   200   200   000    Old_age   Always       -       0
  5 Reallocated_Sector_Ct   0x0033   100   100   000    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       33972
 12 Power_Cycle_Count       0x0032   084   084   000    Old_age   Always       -       15802
169 Unknown_Apple_Attrib    0x0013   243   243   010    Pre-fail  Always       -       7018328887040
173 Wear_Leveling_Count     0x0032   171   171   100    Old_age   Always       -       2796081840867
174 Host_Reads_MiB          0x0022   099   099   000    Old_age   Always       -       220637518
175 Host_Writes_MiB         0x0022   099   099   000    Old_age   Always       -       114628728
192 Power-Off_Retract_Count 0x0012   099   099   000    Old_age   Always       -       78
194 Temperature_Celsius     0x0022   061   021   000    Old_age   Always       -       39 (Min/Max 2/79)
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x001a   200   199   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

Warning! SMART Self-Test Log Structure error: invalid SMART checksum.
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short captive       Completed without error       00%     33972         -
# 2  Short offline       Completed without error       00%     33972         -
# 3  Short offline       Completed without error       00%     33972         -

Warning! SMART Selective Self-Test Log Structure error: invalid SMART checksum.
SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
  255        0    65535  Read_scanning was never started
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
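
For what it's worth, attribute 173 (Wear_Leveling_Count) above may well explain the negative number. If the GUI derives wearout as 100 minus that attribute's normalized VALUE (an assumption about how PVE computes the figure, not something confirmed in this thread), then a drive like this Apple SSD that reports a VALUE above 100 yields a negative result. A minimal sketch of that arithmetic:

Bash:
# Sketch only, not PVE's actual code: pull the normalized VALUE column
# (4th field) of attribute 173 and apply the assumed "100 - VALUE" formula.
value=$(smartctl -A /dev/sdc | awk '$1 == 173 { print $4 }')
echo $(( 100 - 10#$value ))   # VALUE is 171 above, so this prints -71

That would match a negative Wearout like the one in the screenshot, even though the drive itself reports PASSED.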
 
After upgrading to version 7 I get an error message with the "pct cpusets" command.

Bash:
root@pve:~# pct cpusets
Use of uninitialized value $last in numeric le (<=) at /usr/share/perl5/PVE/CLI/pct.pm line 733.
Use of uninitialized value $last in numeric le (<=) at /usr/share/perl5/PVE/CLI/pct.pm line 733.
---------------
100:    1 2
102:        3 4
103:
---------------
root@pve:~#
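
A guess at the trigger, judging from the output: container 103 has no CPUs pinned, and the formatting code seems to stumble over the empty set. Until a fix lands, the effective cpusets can be read straight from the cgroup filesystem (paths assume PVE 7's default cgroup v2 layout; the container IDs are the ones from the output above):

Bash:
# Read each container's effective cpuset directly from cgroup v2,
# bypassing the warning-prone pct formatting.
for id in 100 102 103; do
    printf '%s: ' "$id"
    cat "/sys/fs/cgroup/lxc/$id/cpuset.cpus.effective" 2>/dev/null || echo '(none)'
done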
 
I can reproduce that here, we'll look into a fix. Thanks for the report!
 
Since Proxmox version 7 I have been getting annoying storage problems. When clicking on the individual volumes in the dashboard, only "loading", "communication failure", or "connection timed out" is displayed. With Proxmox VE 6.4 everything worked fine. Because of this error, no virtual machines or containers can be created; the same errors described above are displayed there as well.

Maybe someone has already found a solution.

(Several screenshots attached showing the storage errors and the node's CPU graph.)

It seems that memory problems are not the only ones occurring lately.
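
When the GUI shows "communication failure" and "connection timed out" like this, it is worth checking from the CLI whether the storage layer itself still answers, to separate a GUI problem from a storage one. A rough first pass with standard PVE tools (nothing here is specific to this setup):

Bash:
# If these hang or time out as well, the problem sits below the web GUI.
pvesm status                                  # query every configured storage
systemctl status pvestatd                     # the daemon that feeds the dashboard
journalctl -u pvestatd --since "1 hour ago"   # recent errors from that daemon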
 
@need2gcm The hard disks seem to be okay; I just checked the S.M.A.R.T. values. Furthermore, the problems occur with three hard drives, two of which are only half a year old, so it must be related to Proxmox...
 
Well, regardless, I suggest starting your diagnostics with vda1, since it is having I/O errors and buffer issues. It is likely what is causing both your storage issues and the massive amount of I/O wait I can see in the CPU graph.

What is vda1?
 
vda1 is the /dev/sdb1 disk that was passed through to a VM. This type of I/O error has also only occurred since the upgrade to version 7.0. This is all very strange behavior.
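
To narrow down whether the errors originate at the physical disk or only inside the guest, comparing the kernel logs on both sides is a reasonable next step (device names are taken from the description above; adjust as needed):

Bash:
# On the PVE host: errors against the physical disk backing the VM
dmesg -T | grep -i sdb
# Inside the VM: errors against the virtual disk
dmesg -T | grep -iE 'vda|i/o error'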
 
Please open a new thread for specific errors that may be completely unrelated to the release itself. Even if they are related, longer sub-discussions are not ideal for anyone involved: they crowd this thread for others, and make it easier to miss posts for those who want to help or who have similar issues.

FWIW, I'd check out switching from the new io_uring to the older aio, as described in the release notes.
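
A concrete way to make that switch on a per-disk basis, sketched with example values (VMID 100 and the scsi0/local-lvm names are placeholders; take the real ones from qm config):

Bash:
# See which disks the VM has and their current options
qm config 100 | grep -E '^(scsi|virtio|sata|ide)'
# Re-attach the disk with the older native aio instead of io_uring
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native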
 
After upgrading to 7.0 I had an issue where I used ifupdown2 on one of my nodes and it triggered the other two nodes in my cluster to restart their networks as well. Is this a bug, or something specific to my setup? For context: I made the mistake of installing ifupdown2 after I did the upgrade on my third node, whereas it was already installed prior to the upgrade on nodes 1 and 2.
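
A quick way to compare what each node is actually running, since a mixed ifupdown/ifupdown2 install across the cluster is a plausible suspect (run on every node; this only inspects, it changes nothing):

Bash:
dpkg -l ifupdown ifupdown2 | grep '^ii'   # which implementation is installed
journalctl -u networking --since today    # when and why the network was reloaded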
 
Hello, I was able to successfully upgrade from 6.3 to 7. The only issue that I am seeing is that the graphs no longer show the stats. I will add that they were showing for a day, but no longer do. I am unsure if it was caused by an update. If anyone has insight, it would be much appreciated!

Please see attached for a screenshot
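
A plausible first check for missing graphs, assuming a default install: the data behind them is collected by pvestatd and written to RRD files via rrdcached, so both services need to be healthy and the RRD files should show recent modification times:

Bash:
systemctl status pvestatd rrdcached       # both must be running for graphs to update
ls -l /var/lib/rrdcached/db/pve2-node/    # RRD files should have recent mtimes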
 

Painless upgrade from 6.4 to 7.0 for me on four clusters and clean install on a fifth cluster. Thanks for the great work!

One bug I've found is with Task History:
- Cluster-wide Task History is correct.
- Node Task history is correct for the web GUI node I'm logged into, but Task History for all other nodes on the cluster repeats the history for the local node.
- VM Task History is correct for VMs hosted on the node I'm logged into, but VM Task History for VMs on other nodes in the cluster is empty

I therefore have to log into the host node to get history for that node or for VMs on that node, whereas on PVE 6.4 (and below) I could browse Task History for all nodes and VMs on the cluster from a single web GUI login.

This happens on all my clusters.
 
I have a fresh PVE 7 installation for testing, and manually adding a VLAN interface (vmbr0.1) on the bridge (vmbr0) like I did on PVE 6.3 does not work. Adding the "hwaddress" option to them helped (I have not tested whether it is needed for both or maybe just one of them), even though our network is not MAC-restricted. Hint: a reboot was needed in my case (after testing different network settings).

Well, it looks like it's not so simple. Adding "hwaddress" works only for the node: the node now has a network connection, but the virtual machines on it do not. Could you tell me what the problem might be?

VMs have a MAC address assigned automatically during creation and they use the same bridge (vmbr0), so their situation sounds similar enough to the vmbr0.1 one.
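
For reference, a minimal /etc/network/interfaces sketch of the setup being described (interface names match the post; the physical port, address, gateway, and MAC are placeholders, and the hwaddress line is the workaround mentioned above):

Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    hwaddress aa:bb:cc:dd:ee:ff

auto vmbr0.1
iface vmbr0.1 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1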
 
