Proxmox VE 7.0 (beta) released!

Could you tell us what parameters you had set there? In theory, some things like moving only a subset of cgroups to v2 *could* work with LXC (but I wouldn't recommend it for production use).
Unfortunately I copied the line from another installation, and it worked; since I wasn't concerned about doing a fresh install, I didn't bother with a backup of the original.

I'm not absolutely sure, but I think it was the parameter below. I don't know where it came from; another installation didn't show that setting. The parameter tried to force cgroups v1, which is why I removed it, as that is not my preferred configuration.

GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0" -- not working
GRUB_CMDLINE_LINUX="" -- working
 
Both should work fine, though. Have you by any chance been using systemd from backports? For the non-bpo version, that boot option should have been the default anyway; with the backports version the unified hierarchy is the default, but the old setting should still be fully functional.
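In case it helps with debugging, the hierarchy that is actually active can be checked after boot, and the legacy mode can be forced explicitly. A rough sketch (standard Debian/PVE paths, adapt as needed):

Code:
# prints "cgroup2fs" for the unified hierarchy, "tmpfs" for the legacy/hybrid layout
stat -fc %T /sys/fs/cgroup/

# shows which cgroup-related options the running kernel was actually booted with
cat /proc/cmdline

# to force the legacy hierarchy, set the option in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
# then regenerate the GRUB config and reboot
update-grub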
 
Can anyone elaborate on what you can use BTRFS for now?

I've been using a BTRFS mirror, passed through as two different HDDs to a guest, for a long time - so I'm just wondering what the deal would be there?

Can you use Directory storage for KVM VMs and not have LVs for each VM?

How are the snapshots implemented?

I can't seem to find much of anything about this :(
 
The docs in the PVE 7.0 beta already include BTRFS; if you do not have an installation around, you can read most of the info directly from git.proxmox.com.
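For a first impression, a BTRFS filesystem that is already mounted on the host can be added as storage with an entry in /etc/pve/storage.cfg roughly like the following sketch (storage ID and path are placeholders; see the storage docs for the exact options):

Code:
# /etc/pve/storage.cfg (excerpt)
btrfs: mybtrfs
        path /mnt/datastore
        content images,rootdir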
Ah, excellent - thanks :)

I would strongly recommend changing the RAID5/6 part to:
* RAID levels 5/6 are experimental and are known to cause data corruption. Use at your own risk.

Simply because I've been there before, when those warnings weren't present, and I'm partly the reason why those warnings are now everywhere...
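For what it's worth, a mirrored (RAID1) setup like the one mentioned above can be created manually along these lines (device names are placeholders, and this wipes them):

Code:
# mirror both data and metadata across two disks
mkfs.btrfs -m raid1 -d raid1 -L datastore /dev/sdX /dev/sdY
mkdir -p /mnt/datastore
mount /dev/sdX /mnt/datastore
# verify the profiles actually in use
btrfs filesystem df /mnt/datastore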
 
I was unable to boot here using BTRFS.
It gave me a GRUB error. I had used two 120G disks to install PVE.
I then reinstalled using a single disk and it works, but after booting I was unable to create BTRFS partitions with the web GUI.
I'll try using BTRFS as root again.
I have tried it again and it works.
But I still cannot create other BTRFS partitions on the other virtual HDDs via the web GUI.
 

Attachments

  • 2021-06-28_11-00.png (55.6 KB)
Both should work fine, though. Have you by any chance been using systemd from backports? For the non-bpo version, that boot option should have been the default anyway; with the backports version the unified hierarchy is the default, but the old setting should still be fully functional.
I tested it; the attached grub configuration throws the error which I posted earlier. I don't think that I was using backports, at least I don't remember doing so. There was no related entry in my apt sources.

The host is an Intel NUC (NUC5i3RYH), migrated from 5 to 6 and now to 7. I found out how to solve the issue. I'm posting the grub config for information in case you are interested; I don't expect further analysis.

In case you would like to see further files, logs, or details, let me know.
 

Attachments

  • grub.txt (1.2 KB)
The LXC template for CentOS 7 doesn't work: "cgroup2: Unknown parameter mode"
 
I can't get any virtual machines to start on 7.0-5 Beta. The console hangs before the EFI stage begins. See attached image.

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-5 (running version: 7.0-5/cce9b25f)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-3
pve-kernel-5.11.22-1-pve: 5.11.22-1
pve-kernel-5.11.21-1-pve: 5.11.21-1
pve-kernel-5.11.17-1-pve: 5.11.17-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 16.2.4-pve1
ceph-fuse: 16.2.4-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 7.0-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-6
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-1
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 1.1.10-1
proxmox-backup-file-restore: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.1-4
pve-cluster: 7.0-2
pve-container: 4.0-3
pve-docs: 7.0-3
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.2-2
pve-i18n: 2.3-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-5
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

Code:
agent: 1,fstrim_cloned_disks=1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 28
cpu: host
cpuunits: 101
efidisk0: CephRBD:vm-111-disk-0,size=1M
machine: pc-q35-6.0
memory: 24576
name: encoder
net0: virtio=00:a0:98:7f:85:c8,bridge=vmbr1,firewall=1
numa: 1
ostype: win10
scsi0: CephRBD:vm-111-disk-1,cache=writeback,discard=on,iothread=1,size=153601M,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=3fe0d48d-0709-4e0c-99b6-8f8bcfb658f0
sockets: 2
tablet: 0
vga: std,memory=32
vmgenid: 1acdd3ec-cc43-4bcd-b739-05202257a5b2
vmstatestorage: CephRBD


Edit: Bug report filed - https://bugzilla.proxmox.com/show_bug.cgi?id=3498
 

Attachments

  • Screen Shot 2021-06-29 at 7.13.09 AM.png (88.5 KB)
Ceph Pacific introduced new RocksDB sharding. Attempting to reshard an OSD with Ceph Pacific on Proxmox 7.0-5 beta results in corruption of the OSD, requiring the OSD to be deleted and backfilled. The OSD cannot be restarted or repaired after the failed reshard.

Code:
root@viper:~# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-27 --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
2021-06-29T07:39:40.949-0500 7f54703dd240 -1 rocksdb: prepare_for_reshard failure parsing column options: block_cache={type=binned_lru}
ceph-bluestore-tool: /build/ceph/ceph-16.2.4/src/rocksdb/db/column_family.cc:1387: rocksdb::ColumnFamilySet::~ColumnFamilySet(): Assertion `last_ref' failed.
*** Caught signal (Aborted) **
 in thread 7f54703dd240 thread_name:ceph-bluestore-
 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f5470aa5140]
 2: gsignal()
 3: abort()
 4: /lib/x86_64-linux-gnu/libc.so.6(+0x2540f) [0x7f54705be40f]
 5: /lib/x86_64-linux-gnu/libc.so.6(+0x34662) [0x7f54705cd662]
 6: (rocksdb::ColumnFamilySet::~ColumnFamilySet()+0x82) [0x55ec0217fb36]
 7: (std::default_delete<rocksdb::ColumnFamilySet>::operator()(rocksdb::ColumnFamilySet*) const+0x22) [0x55ec01fd699c]
 8: (std::__uniq_ptr_impl<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x5b) [0x55ec01fd6de5]
 9: (std::unique_ptr<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x2f) [0x55ec01fd08f5]
 10: (rocksdb::VersionSet::~VersionSet()+0x4f) [0x55ec01fb6ff9]
 11: (rocksdb::VersionSet::~VersionSet()+0x18) [0x55ec01fb7170]
 12: (std::default_delete<rocksdb::VersionSet>::operator()(rocksdb::VersionSet*) const+0x28) [0x55ec01e68d64]
 13: (std::__uniq_ptr_impl<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x5b) [0x55ec01e6ac81]
 14: (std::unique_ptr<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x2f) [0x55ec01e5bef5]
 15: (rocksdb::DBImpl::CloseHelper()+0xa12) [0x55ec01e27414]
 16: (rocksdb::DBImpl::~DBImpl()+0x4e) [0x55ec01e2784a]
 17: (rocksdb::DBImpl::~DBImpl()+0x18) [0x55ec01e27bfa]
 18: (RocksDBStore::close()+0x355) [0x55ec01dfc9a5]
 19: (RocksDBStore::reshard(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RocksDBStore::resharding_ctrl const*)+0x231) [0x55ec01e03ec1]
 20: main()
 21: __libc_start_main()
 22: _start()
2021-06-29T07:39:40.965-0500 7f54703dd240 -1 *** Caught signal (Aborted) **
 in thread 7f54703dd240 thread_name:ceph-bluestore-

 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f5470aa5140]
 2: gsignal()
 3: abort()
 4: /lib/x86_64-linux-gnu/libc.so.6(+0x2540f) [0x7f54705be40f]
 5: /lib/x86_64-linux-gnu/libc.so.6(+0x34662) [0x7f54705cd662]
 6: (rocksdb::ColumnFamilySet::~ColumnFamilySet()+0x82) [0x55ec0217fb36]
 7: (std::default_delete<rocksdb::ColumnFamilySet>::operator()(rocksdb::ColumnFamilySet*) const+0x22) [0x55ec01fd699c]
 8: (std::__uniq_ptr_impl<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x5b) [0x55ec01fd6de5]
 9: (std::unique_ptr<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x2f) [0x55ec01fd08f5]
 10: (rocksdb::VersionSet::~VersionSet()+0x4f) [0x55ec01fb6ff9]
 11: (rocksdb::VersionSet::~VersionSet()+0x18) [0x55ec01fb7170]
 12: (std::default_delete<rocksdb::VersionSet>::operator()(rocksdb::VersionSet*) const+0x28) [0x55ec01e68d64]
 13: (std::__uniq_ptr_impl<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x5b) [0x55ec01e6ac81]
 14: (std::unique_ptr<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x2f) [0x55ec01e5bef5]
 15: (rocksdb::DBImpl::CloseHelper()+0xa12) [0x55ec01e27414]
 16: (rocksdb::DBImpl::~DBImpl()+0x4e) [0x55ec01e2784a]
 17: (rocksdb::DBImpl::~DBImpl()+0x18) [0x55ec01e27bfa]
 18: (RocksDBStore::close()+0x355) [0x55ec01dfc9a5]
 19: (RocksDBStore::reshard(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RocksDBStore::resharding_ctrl const*)+0x231) [0x55ec01e03ec1]
 20: main()
 21: __libc_start_main()
 22: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

  -980> 2021-06-29T07:39:40.949-0500 7f54703dd240 -1 rocksdb: prepare_for_reshard failure parsing column options: block_cache={type=binned_lru}
  -979> 2021-06-29T07:39:40.965-0500 7f54703dd240 -1 *** Caught signal (Aborted) **
 in thread 7f54703dd240 thread_name:ceph-bluestore-

 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f5470aa5140]
 2: gsignal()
 3: abort()
 4: /lib/x86_64-linux-gnu/libc.so.6(+0x2540f) [0x7f54705be40f]
 5: /lib/x86_64-linux-gnu/libc.so.6(+0x34662) [0x7f54705cd662]
 6: (rocksdb::ColumnFamilySet::~ColumnFamilySet()+0x82) [0x55ec0217fb36]
 7: (std::default_delete<rocksdb::ColumnFamilySet>::operator()(rocksdb::ColumnFamilySet*) const+0x22) [0x55ec01fd699c]
 8: (std::__uniq_ptr_impl<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x5b) [0x55ec01fd6de5]
 9: (std::unique_ptr<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x2f) [0x55ec01fd08f5]
 10: (rocksdb::VersionSet::~VersionSet()+0x4f) [0x55ec01fb6ff9]
 11: (rocksdb::VersionSet::~VersionSet()+0x18) [0x55ec01fb7170]
 12: (std::default_delete<rocksdb::VersionSet>::operator()(rocksdb::VersionSet*) const+0x28) [0x55ec01e68d64]
 13: (std::__uniq_ptr_impl<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x5b) [0x55ec01e6ac81]
 14: (std::unique_ptr<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x2f) [0x55ec01e5bef5]
 15: (rocksdb::DBImpl::CloseHelper()+0xa12) [0x55ec01e27414]
 16: (rocksdb::DBImpl::~DBImpl()+0x4e) [0x55ec01e2784a]
 17: (rocksdb::DBImpl::~DBImpl()+0x18) [0x55ec01e27bfa]
 18: (RocksDBStore::close()+0x355) [0x55ec01dfc9a5]
 19: (RocksDBStore::reshard(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RocksDBStore::resharding_ctrl const*)+0x231) [0x55ec01e03ec1]
 20: main()
 21: __libc_start_main()
 22: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

    -3> 2021-06-29T07:39:40.949-0500 7f54703dd240 -1 rocksdb: prepare_for_reshard failure parsing column options: block_cache={type=binned_lru}
     0> 2021-06-29T07:39:40.965-0500 7f54703dd240 -1 *** Caught signal (Aborted) **
 in thread 7f54703dd240 thread_name:ceph-bluestore-

 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f5470aa5140]
 2: gsignal()
 3: abort()
 4: /lib/x86_64-linux-gnu/libc.so.6(+0x2540f) [0x7f54705be40f]
 5: /lib/x86_64-linux-gnu/libc.so.6(+0x34662) [0x7f54705cd662]
 6: (rocksdb::ColumnFamilySet::~ColumnFamilySet()+0x82) [0x55ec0217fb36]
 7: (std::default_delete<rocksdb::ColumnFamilySet>::operator()(rocksdb::ColumnFamilySet*) const+0x22) [0x55ec01fd699c]
 8: (std::__uniq_ptr_impl<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x5b) [0x55ec01fd6de5]
 9: (std::unique_ptr<rocksdb::ColumnFamilySet, std::default_delete<rocksdb::ColumnFamilySet> >::reset(rocksdb::ColumnFamilySet*)+0x2f) [0x55ec01fd08f5]
 10: (rocksdb::VersionSet::~VersionSet()+0x4f) [0x55ec01fb6ff9]
 11: (rocksdb::VersionSet::~VersionSet()+0x18) [0x55ec01fb7170]
 12: (std::default_delete<rocksdb::VersionSet>::operator()(rocksdb::VersionSet*) const+0x28) [0x55ec01e68d64]
 13: (std::__uniq_ptr_impl<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x5b) [0x55ec01e6ac81]
 14: (std::unique_ptr<rocksdb::VersionSet, std::default_delete<rocksdb::VersionSet> >::reset(rocksdb::VersionSet*)+0x2f) [0x55ec01e5bef5]
 15: (rocksdb::DBImpl::CloseHelper()+0xa12) [0x55ec01e27414]
 16: (rocksdb::DBImpl::~DBImpl()+0x4e) [0x55ec01e2784a]
 17: (rocksdb::DBImpl::~DBImpl()+0x18) [0x55ec01e27bfa]
 18: (RocksDBStore::close()+0x355) [0x55ec01dfc9a5]
 19: (RocksDBStore::reshard(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RocksDBStore::resharding_ctrl const*)+0x231) [0x55ec01e03ec1]
 20: main()
 21: __libc_start_main()
 22: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Aborted


Bug Report - https://bugzilla.proxmox.com/show_bug.cgi?id=3499
 
Is this already implemented? "Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC."

My servers all had the same machine-id after the update; I had to change it manually.

That was the only problem with the update, everything runs fine so far.
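For reference, regenerating the machine-id by hand can be done roughly like the sketch below (double-check against the official upgrade notes, and do it one node at a time with a reboot afterwards):

Code:
# remove the duplicated IDs
rm -f /etc/machine-id /var/lib/dbus/machine-id
# an empty or missing /etc/machine-id gets re-initialized by this call
systemd-machine-id-setup
# keep the D-Bus machine id in sync with the systemd one
ln -sf /etc/machine-id /var/lib/dbus/machine-id
reboot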
 
I get an error message from Ceph after the update:

Module 'diskprediction_local' has failed: 'SVC' object has no attribute 'break_ties'
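If the warning keeps reappearing, the failing mgr module can be disabled until the underlying issue is fixed; roughly (untested here):

Code:
# list the mgr modules and their state
ceph mgr module ls
# disable the failing disk prediction module
ceph mgr module disable diskprediction_local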
 
I can't get any virtual machines to start on 7.0-5 Beta. The console hangs before the EFI stage begins. See attached image.

As EFI-backed VMs start just fine here, it would probably be better to open a new forum thread, which is a bit friendlier for the initial investigation. In general, it would be good to ensure that you can navigate into the OVMF EFI menu and to start newly created VMs with EFI (or try some Linux live ISO for existing ones), to rule some things out and help with narrowing down a possible reproducer.
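A minimal throw-away VM for such a test could be created along these lines (VMID, storage name, and ISO are placeholders, not taken from the report above):

Code:
# hypothetical test VM with OVMF firmware and a small EFI vars disk
qm create 9999 --name ovmf-test --memory 2048 --bios ovmf --machine q35 \
    --scsihw virtio-scsi-pci --efidisk0 local-lvm:1 \
    --cdrom local:iso/<some-live-iso>.iso
qm start 9999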
 
Ceph Pacific introduced new RocksDB sharding. Attempting to reshard an OSD with Ceph Pacific on Proxmox 7.0-5 beta results in corruption of the OSD, requiring the OSD to be deleted and backfilled. The OSD cannot be restarted or repaired after the failed reshard.
A newly created Ceph Pacific cluster would already use sharding for RocksDB automatically, so I assume this was an upgraded cluster?
Some more information would be interesting to know, ideally in a new thread (a rough sketch of commands for gathering most of it follows below):
  • Was the cluster 100% HEALTH_OK before that?
  • How was the cluster upgraded (i.e., closely following our upgrade how-to)?
  • What was the Ceph cluster initially created with, and how was it upgraded over time?
  • What type of OSDs are in use: FileStore, BlueStore, or both?
  • Was the RocksDB on its own (separate) device?
  • What was the exact command used to trigger the reshard?
Thanks!
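For reference, a rough sketch of commands that would answer most of the points above (OSD ID 27 taken from the log excerpt, adjust as needed; exact metadata field names may differ):

Code:
ceph -s          # overall cluster health
ceph versions    # which release each daemon type is running
# object store type and whether the RocksDB/WAL sits on a dedicated device
ceph osd metadata 27 | grep -E 'osd_objectstore|bluefs'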
 