Please open a fresh thread for specific problems!

I installed a fresh Proxmox VE 6 but imported a ZFS pool from 5.4. When I run zpool status -t it shows "trim unsupported" on my pool1, which consists of Crucial MX500 SSDs. Why is TRIM not supported?
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 14.2.1-pve2
ceph-fuse: 14.2.1-pve2
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-63
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
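For anyone else seeing this, a few generic commands that can help narrow down why TRIM is reported as unsupported (the pool name pool1 is taken from the post above; /dev/sdX is a placeholder for one of the MX500s):

    # does the kernel expose discard support for the SSD itself?
    lsblk --discard /dev/sdX    # non-zero DISC-GRAN/DISC-MAX means the device accepts discards
    # per-vdev TRIM state as seen by ZFS 0.8
    zpool status -t pool1
    # if the devices do support discard, TRIM can be run manually or enabled permanently
    zpool trim pool1
    zpool set autotrim=on pool1

If lsblk shows zero discard granularity, the problem sits below ZFS (controller, driver or passthrough layer) rather than in the pool itself.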
Check the logs (especially for messages from corosync and pve-cluster/pmxcfs). If this does not lead to a solution, please open a new thread (with the logs attached/pasted in code-tags). Thanks!

After upgrading to 6.0, most of the VMs started going into HA error status on different hosts every 2-3 days.
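For reference, one way to collect those logs before opening a thread (unit names as on a default PVE 6 install; adjust the time range to when the HA errors happened):

    # corosync and pmxcfs (pve-cluster) messages from the last three days
    journalctl -u corosync -u pve-cluster --since "3 days ago" > cluster-logs.txt
    # current view of the HA stack and its services
    ha-manager status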
Yes, if the network backing corosync is stable and not loaded too much, which should be the case, as otherwise the current setup would already show issues. But still, test it first, e.g., in a (virtual) test setup.

I was thinking of updating Corosync to v3 on all nodes first (at the same time); will this keep everything running?
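The upgrade guide handles this as its own step while the nodes are still on 5.4/stretch; roughly along these lines on every node (treat the repository line as an illustration and copy the exact one from the guide):

    # stop the HA services first so no node gets fenced while corosync restarts
    systemctl stop pve-ha-lrm pve-ha-crm
    # add the Corosync 3 repository for stretch and upgrade corosync on all nodes
    echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" > /etc/apt/sources.list.d/corosync3.list
    apt update && apt dist-upgrade

Once all nodes run Corosync 3 and the cluster is quorate again, the HA services can be started and the actual 6.0 upgrade can follow node by node.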
Do not update Proxmox VE and Ceph in one go. First Proxmox VE, and only then, once PVE has been updated to 6.0 on the whole cluster, all nodes have been restarted and everything has been healthy for a bit, do the update from Ceph Luminous to Ceph Nautilus.

Then start the Proxmox update per node, including Ceph? Updating Ceph feels a bit dangerous.
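A sketch of the sanity checks one might run between the two phases (nothing beyond the standard PVE and Ceph CLI tools):

    # all nodes report 6.0 and the cluster is quorate
    pveversion
    pvecm status
    # Ceph must be healthy and still entirely on Luminous before starting the Nautilus upgrade
    ceph -s
    ceph versions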
Just be sure to follow our Upgrade Guide: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#In-place_upgrade

Is this the best way to follow? What do you recommend if we are talking about a large number of nodes?
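Whatever the node count, the checklist script that ships with the latest 5.4 packages is worth running on every node before touching anything, and guests can be live-migrated off the node currently being upgraded (VMID and target node below are placeholders):

    # read-only sanity check for known upgrade blockers
    pve5to6
    # move a running VM away from the node that is about to be upgraded
    qm migrate <vmid> <target-node> --online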
My experience with the upgrade was very smooth; the current 6.0 (6.0-7) is very stable on our cluster. A "lazy sysadmin" can freeze it and forget it for a while.

Just curious: I have a Proxmox on 5.4, not yet in a cluster. The idea is to upgrade to 6.0 because I cannot get ZFS to boot, I'm guessing because it is running an HP Smart Array P440ar in HBA mode (even with UEFI disabled), but I saw that on 6.0 it boots up with UEFI. My question is: how stable is it? I am going to try it out this week, and if it works, how stable or recommended is it to combine it in a cluster with 5.4 on the other hosts?
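On the ZFS boot part: PVE 6 installs ZFS-on-root systems booted via UEFI with systemd-boot instead of GRUB, which avoids GRUB's limited ZFS feature support. A quick, generic way to check what a test install ended up with (standard tools only, nothing controller-specific assumed):

    # booted via UEFI or legacy BIOS?
    [ -d /sys/firmware/efi ] && echo "UEFI" || echo "legacy BIOS"
    # list the configured UEFI boot entries
    efibootmgr -v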
PVE 6 is based on Debian Buster, not Stretch!
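Which is why the in-place upgrade switches every repository from stretch to buster before the dist-upgrade; the guide's approach is essentially the following (verify the exact lines against the wiki, and only adjust pve-enterprise.list if that repository is configured):

    # point Debian and PVE repositories at buster
    sed -i 's/stretch/buster/g' /etc/apt/sources.list
    sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list
    apt update && apt dist-upgrade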