Proxmox VE 7.0 released!

Just ran into a problem with LXCs on Proxmox VE v7.0-10: when I log out of an LXC from the console, the login prompt does not return as it used to; I just get a blank screen. I've tried Firefox and Chrome. Any ideas?
 
Just ran into a problem with LXCs on Proxmox VE v7.0-10: when I log out of an LXC from the console, the login prompt does not return as it used to; I just get a blank screen. I've tried Firefox and Chrome. Any ideas?
How do you log out exactly, by sending EOF with CTRL+D or with logout, and what distro(s) is this happening on?

What happens if you press enter a few times at the blank screen?
 
How do you log out exactly, by sending EOF with CTRL+D or with logout, and what distro(s) is this happening on?

What happens if you press enter a few times at the blank screen?
I use logout as I always have.
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
When I press enter a few times now, nothing happens except the cursor moving down the blank screen; the login prompt does not return.
I get the same result with 'exit' instead of 'logout', which also worked before the upgrade to PVE v7...

Has the default cmode always been tty? I found that by changing cmode to 'console', the GUI console now behaves as it did before the upgrade.
Is there a way to set 'console' as the default cmode for all LXCs, or is this an indication of some other underlying problem?

 
I have an ASRock 4x4 4800U with an NVMe drive and about 10 Ubuntu VMs. My VM backups are stored on a Synology 1817+ running the latest DSM 7. Both devices use LACP. The connection is SMB 3.0.

After upgrading my Proxmox host from the latest 6.4 to 7.0-10 (pve-no-subscription repo), this SMB connection failed. The error message is "unable to activate storage 'Synology' - directory '/mnt/pve/Synology' does not exist or is unreachable (500)".
I tried the following:
  • 1. Adding a new CIFS storage through the GUI: works, but loses the connection after 5-10 seconds.
  • 2. Adding a new CIFS storage through the CLI with pvesm (roughly as sketched below): works, but loses the connection after 5-10 seconds; tried SMB 2.0/2.1/3.0.
  • 3. Testing the connection with smbclient: works.
  • 4. Testing the connection with pvesm scan cifs: works.
  • 5. Increasing the timeout according to this post did not work: https://forum.proxmox.com/threads/cifs-storage-is-not-online-500.44983/post-377878
  • 6. When I delete the SMB cache on the Synology, the connection works again for about 5-10 seconds. I also tried mounting the share with mount and the option "cache=none" and then adding the directory as a local directory through the web GUI; that didn't work (and crashed the GUI). A forced reboot was needed.
  • 7. Created a new account on the Synology and gave it less restrictive permissions on the share; it worked for about 5-10 seconds, then crashed.
  • 8. Switching to NFS got me a similar error; it loses the connection after 5-10 seconds.
  • 9. Removed all network cables except one on the Proxmox host.
I'm not sure what to try next; maybe I'll disable the network bond and go back to a single NIC, even though it worked perfectly on 6.4. Any other ideas?
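For reference, steps 2 and 4 were roughly along these lines (the server IP, share, username, and storage names here are placeholders, not my real values):
Code:
# step 4: scan the NAS for available shares
pvesm scan cifs 192.168.1.10 --username backupuser
# step 2: add the storage, pinning the SMB dialect to 3.0
pvesm add cifs Synology --server 192.168.1.10 --share backup --username backupuser --smbversion 3.0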

Thanks for your help!
 
I just tried to install 7.0 on a Dell PowerEdge R730 (Xeon E5-2620 v4) and got this error booting Proxmox from a PERC H730 RAID array:

Code:
[Tue Nov 13 14:35:35 2018] Uhhuh. NMI received for unknown reason 21 on CPU 84.
[Tue Nov 13 14:35:35 2018] Do you have a strange power saving mode enabled?
[Tue Nov 13 14:35:35 2018] Dazed and confused, but trying to continue

Is this a known issue?
 
So is 7.0 now the recommended version for new installs, or would it be advisable to stay on 6.4 until 7.x and Debian Bullseye are more mature?
 
So is 7.0 now the recommended version for new installs
Yes.
or would it be advisable to stay on 6.4 until 7.x and Debian Bullseye are more mature?
The base of 7.0 is quite stable. 7.1 will probably get a new QEMU and a new kernel version, so some problems will get fixed and some new ones will appear; as always, we'll try to address them as best as possible.

Debian Bullseye has been in hard freeze since mid-March; not much is coming along now, only small updates that would also get released anyway through the security and updates repositories.
 
I just tried to install 7.0 on a Dell PowerEdge R730 (Xeon E5-2620 v4) and got this error booting Proxmox from a PERC H730 RAID array:

Code:
[Tue Nov 13 14:35:35 2018] Uhhuh. NMI received for unknown reason 21 on CPU 84.
[Tue Nov 13 14:35:35 2018] Do you have a strange power saving mode enabled?
[Tue Nov 13 14:35:35 2018] Dazed and confused, but trying to continue

Is this a known issue?
Somewhat, but it's not new to 7.0; it also happened with past releases, or rather, with their kernel versions.
But often the system can continue quite normally; is that not the case in your situation?

Ensure you have installed the latest BIOS/firmware updates; checking the power-saving configuration (C-states) in the BIOS/FW settings could be a good idea too.
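If needed, the C-states the kernel currently exposes can also be checked from Linux itself; a quick sketch using standard sysfs paths (what shows up depends on the active cpuidle driver):
Code:
# which cpuidle driver is active (e.g., intel_idle or acpi_idle)
cat /sys/devices/system/cpu/cpuidle/current_driver
# list the idle states the kernel exposes for CPU 0
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name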
 
I use logout as I always have.
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
When I press enter a few times now, nothing happens except the cursor moving down the blank screen; the login prompt does not return.
I get the same result with 'exit' instead of 'logout', which also worked before the upgrade to PVE v7...
But what runs in the container, which distro and version? The container "OS" is responsible for spawning gettys on all consoles.

Has the default cmode always been tty?
Yes.

Is there a way to set 'console' as the default cmode for all LXCs, or is this an indication of some other underlying problem?
Not integrated, but you can easily script that, for example with pct set $CTID --cmode console; see the sketch below.
It would seem that your CT's OS does not automatically respawn a getty on the /dev/ttyX devices, but does on /dev/console; depending on the distro and version, that can often be adapted quite simply. Why it broke now would need some closer investigation, plus the info about which OS runs in the CTs.
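A minimal sketch of such a script, assuming all CTs on the node should be switched (pct list and pct set are the actual tools; the loop itself is just an illustration):
Code:
# switch every container on this node to cmode=console
for CTID in $(pct list | awk 'NR>1 {print $1}'); do
    pct set "$CTID" --cmode console
done
Alternatively, inside a systemd-based CT, something like systemctl enable --now getty@tty1.service would bring a getty back on tty1; whether that applies depends on the distro and version in the container.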

Please open a new thread for that though; if you want to investigate, it would make sense not to do so in the main 7.0 release thread. Thanks!
 
I just tried to install 7.0 on a Dell PowerEdge R730 (Xeon E5-2620 v4) and got this error booting Proxmox from a PERC H730 RAID array:

Code:
[Tue Nov 13 14:35:35 2018] Uhhuh. NMI received for unknown reason 21 on CPU 84.
[Tue Nov 13 14:35:35 2018] Do you have a strange power saving mode enabled?
[Tue Nov 13 14:35:35 2018] Dazed and confused, but trying to continue

Is this a known issue?
Funnily enough, this ended up being a 4-port NIC in the system; it's labelled "made for IBM by Intel", and once I removed it I was able to boot Proxmox with no problem.
 
Somewhat, but it's not new to 7.0; it also happened with past releases, or rather, with their kernel versions.
But often the system can continue quite normally; is that not the case in your situation?

Ensure you have installed the latest BIOS/firmware updates; checking the power-saving configuration (C-states) in the BIOS/FW settings could be a good idea too.
Funnily enough, this ended up being a 4-port NIC in the system; it's labelled "made for IBM by Intel", and once I removed it I was able to boot Proxmox with no problem.
 
PVE7 update story

Hello everyone,

System: HomeLab Dell R720xd, all firmware updated to the latest, PVE booting from 2 Samsung SSDs in a ZFS mirror; started almost 2 years ago with PVE 5.x and was on the latest 6.4.x, managed by one Linux noob.

I first read some posts to find out which problems users encountered updating their PVE. Things I noticed: zpool upgrade needs UEFI boot, and I was booting via BIOS. After noticing the 512 MB UEFI partitions already existed, I managed to switch to UEFI in the server BIOS setup and removed all PXE options in the settings. The server booted just fine in UEFI mode.
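(For anyone wanting to verify the boot mode after such a switch, a quick check from the shell; the /sys/firmware/efi tree only exists when booted via UEFI:)
Code:
# reports UEFI if the efi sysfs tree is present, BIOS otherwise
[ -d /sys/firmware/efi ] && echo UEFI || echo BIOS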

Ran pve6to7 --full and fixed all warnings until there were none, ran zpool upgrade, and the reboot worked. Added hwaddress to the bridge interface. Changed the apt lists to point to bullseye and proceeded to upgrade to 7. Everything seems to have updated; after the reboot, the VMs and CTs started and everything came up as expected.
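For anyone following along, the repository switch boiled down to something like this (a sketch of the documented 6-to-7 steps; the no-subscription repo line and the list file name are assumptions, adjust them to your setup):
Code:
# point the Debian repos at bullseye
sed -i 's/buster/bullseye/g' /etc/apt/sources.list
# point the PVE repo at bullseye (no-subscription here; your file name may differ)
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
apt update && apt dist-upgrade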

At first I thought this would be a daunting task, and I was afraid of losing the install and maybe having to do a fresh clean install. I was especially wary of losing the tank pool that contains my Plex media.

Special thanks to the engineers for making this a pleasant experience!
 
Hello, I have some problems after the update; OSDs sometimes come back down and out:
Code:
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 bluestore(/var/lib/ceph/osd/ceph-22/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-22/block: (13) Permission denied
2021-08-01T07:55:38.718+0200 7f35a7603f00  1 bluestore(/var/lib/ceph/osd/ceph-22) _mount path /var/lib/ceph/osd/ceph-22
2021-08-01T07:55:38.718+0200 7f35a7603f00  0 bluestore(/var/lib/ceph/osd/ceph-22) _open_db_and_around read-only:0 repair:0
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 bluestore(/var/lib/ceph/osd/ceph-22/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-22/block: (13) Permission denied
2021-08-01T07:55:38.718+0200 7f35a7603f00  1 bdev(0x55f357306400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 bdev(0x55f357306400 /var/lib/ceph/osd/ceph-22/block) open open got: (13) Permission denied
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 osd.22 0 OSD:init: unable to mount object store
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1  ** ERROR: osd init failed: (13) Permission denied
2021-08-01T08:25:13.658+0200 7f8ef067af00  0 set uid:gid to 64045:64045 (ceph:ceph)
2021-08-01T08:25:13.658+0200 7f8ef067af00  0 ceph version 16.2.5 (9b9dd76e12f1907fe5dcc0c1fadadbb784022a42) pacific (stable), process ceph-osd, pid 76003
2021-08-01T08:25:13.658+0200 7f8ef067af00  0 pidfile_write: ignore empty --pid-file
2021-08-01T08:25:13.662+0200 7f8ef067af00  1 bdev(0x564ef318a800 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:13.662+0200 7f8ef067af00  1 bdev(0x564ef318a800 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:13.662+0200 7f8ef067af00  1 bluestore(/var/lib/ceph/osd/ceph-22) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2021-08-01T08:25:13.662+0200 7f8ef067af00  1 bdev(0x564ef318ac00 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:13.666+0200 7f8ef067af00  1 bdev(0x564ef318ac00 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:13.666+0200 7f8ef067af00  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-22/block size 894 GiB
2021-08-01T08:25:13.666+0200 7f8ef067af00  1 bdev(0x564ef318ac00 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:13.986+0200 7f8ef067af00  1 bdev(0x564ef318a800 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:14.234+0200 7f8ef067af00  0 starting osd.22 osd_data /var/lib/ceph/osd/ceph-22 /var/lib/ceph/osd/ceph-22/journal
2021-08-01T08:25:14.262+0200 7f8ef067af00  0 load: jerasure load: lrc load: isa
2021-08-01T08:25:14.262+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:14.262+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:14.262+0200 7f8ef067af00  1 bluestore(/var/lib/ceph/osd/ceph-22) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2021-08-01T08:25:14.262+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:14.570+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:14.570+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:14.570+0200 7f8ef067af00  1 bluestore(/var/lib/ceph/osd/ceph-22) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2021-08-01T08:25:14.570+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:14.894+0200 7f8ef067af00  0 osd.22:0.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2021-08-01T08:25:14.894+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:14.894+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:14.894+0200 7f8ef067af00  1 bluestore(/var/lib/ceph/osd/ceph-22) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2021-08-01T08:25:14.894+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:15.226+0200 7f8ef067af00  0 osd.22:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2021-08-01T08:25:15.226+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:15.226+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
2021-08-01T08:25:15.226+0200 7f8ef067af00  1 bluestore(/var/lib/ceph/osd/ceph-22) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2021-08-01T08:25:15.226+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) close
2021-08-01T08:25:15.554+0200 7f8ef067af00  0 osd.22:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=196)
2021-08-01T08:25:15.554+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open path /var/lib/ceph/osd/ceph-22/block
2021-08-01T08:25:15.554+0200 7f8ef067af00  1 bdev(0x564ef3e54400 /var/lib/ceph/osd/ceph-22/block) open size 960091197440 (0xdf89e51000, 894 GiB) block_size 4096 (4 KiB) non-rotational discard supported
Has anybody experienced something similar?
Sometimes I need to destroy the OSD and recreate it, and sometimes it starts without problems, where 20 minutes earlier the same OSD could not start due to the permission problem.

Regards
 
@Mikepop, can you please create a new thread for that? If you do, please also run ls -l /dev/<OSD disk>* and post that output.
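For example, something along these lines (osd.22 taken from the log above; ceph:ceph is the ownership one would normally expect on the device node):
Code:
# resolve the device behind the OSD's block symlink, then check its ownership
readlink -f /var/lib/ceph/osd/ceph-22/block
ls -l "$(readlink -f /var/lib/ceph/osd/ceph-22/block)"
# a healthy OSD device node is normally owned by ceph:ceph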
 
It looks like this has changed; it's now done with the cpupower tool:
Code:
# -g sets the CPUfreq governor
cpupower frequency-set -g GOVERNOR
# Examples
cpupower frequency-set -g performance
cpupower frequency-set -g schedutil
Code:
# apt-get install cpupower
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package cpupower

*cries*

Where did you get the tool?
 
Code:
# apt-get install cpupower
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package cpupower

*cries*

Where did you get the tool?
Code:
root@stor01:~# apt search cpupower
Sorting... Done
Full Text Search... Done
cpupower-gui/testing,testing 0.7.2-2 amd64
  GUI utility to change the CPU frequency


libcpupower-dev/testing,testing 5.10.46-4 amd64
  CPU frequency and voltage scaling tools for Linux (development files)


libcpupower1/testing,testing,now 5.10.46-4 amd64 [installed,automatic]
  CPU frequency and voltage scaling tools for Linux (libraries)


linux-cpupower/testing,testing,now 5.10.46-4 amd64 [installed,automatic]
  CPU power management tools for Linux

Run apt update && apt install linux-cpupower
 
Code:
root@stor01:~# apt search cpupower
Sorting... Done
Full Text Search... Done
cpupower-gui/testing,testing 0.7.2-2 amd64
  GUI utility to change the CPU frequency


libcpupower-dev/testing,testing 5.10.46-4 amd64
  CPU frequency and voltage scaling tools for Linux (development files)


libcpupower1/testing,testing,now 5.10.46-4 amd64 [installed,automatic]
  CPU frequency and voltage scaling tools for Linux (libraries)


linux-cpupower/testing,testing,now 5.10.46-4 amd64 [installed,automatic]
  CPU power management tools for Linux

Run apt update && apt install linux-cpupower

On my local Ryzen machine I set the minimum clock to the base clock speed, as I've been fighting KVM kernel module page faults that occurred under low or idle loads. Disabling all C-states kept the power draw too high, so I did this instead.

Interestingly, performance is better with schedutil than with the performance governor; it seems to turbo up quicker. But I expect that schedutil without the minimum speed held high might take longer to ramp out of idle clocks, hence the experiences reported here.
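In case it's useful, the minimum-clock pin looks roughly like this with cpupower (-d sets the minimum frequency; 3500MHz is just a stand-in, substitute your CPU's actual base clock):
Code:
# hold the minimum frequency at the base clock, keep schedutil as the governor
cpupower frequency-set -d 3500MHz
cpupower frequency-set -g schedutil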
 
