A quick way to look at the file without all the extensive comments:
grep -vE '^\s*(#|$)' /etc/lvm/lvm.conf
multipath_component_detection had been set to 0 in /etc/lvm/lvm.conf and needed to be set back to 1. The customer had changed the parameter, which caused this error. Thank you all for your help, I learned a lot about iSCSI. I even created my own iSCSI target in a Debian VM to test...
Thanks to everyone for the help.
I managed to fix the reboot loop on my Dell PowerEdge R730xd after upgrading to Proxmox VE 9.
The issue was caused by the newest PVE kernel (6.17.x) triggering Machine Check Exceptions on this hardware (older...
Quincy is already EOL; upgrade to Reef and then maybe even to Squid. The bug you mentioned is only listed for Ceph 18, at least on Clyso's side, but there are others for Ceph 19. I personally don't have any issues with 19.x so far.
17 and 18 are already EOL, why not upgrade to 19?
There is https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid, and there should be a doc for 18, but I'm not seeing it.
We also did 15 upgrades from Ceph 19 to 20 in nested PVE environments (all clean installs from training); all went fine without issues. The only thing that was not 100% clear was how to check which MDS services are standby via the CLI:
We looked at the...
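For checking standby MDS daemons from the CLI, the standard Ceph tooling (not mentioned in the original post, so take this as a sketch) would be `ceph mds stat` for a one-line summary or `ceph fs status` for a table that lists standby daemons separately. The sample output below is illustrative only:

```shell
# Compact summary:   ceph mds stat
# Table view:        ceph fs status
#
# The compact form ends with the standby count, which can be extracted
# with sed. "$sample" here stands in for real `ceph mds stat` output:
sample='cephfs:1 {0=a=up:active} 2 up:standby'
standby_count=$(printf '%s\n' "$sample" | sed -n 's/.* \([0-9][0-9]*\) up:standby.*/\1/p')
echo "$standby_count"
```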
You could also boot the debug mode from the installer and try to capture the logs; the debug mode also helps to identify at which step exactly in the installer the reboot is triggered.
/dev/mapper/mpathX devices are just dynamic aliases created by multipathd based on the order of device discovery; they are not guaranteed to be consistent across hosts or even across reboots. Adding or deleting iSCSI targets will also cause them to be...
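If stable names are needed, the usual approach (not from the original post) is to pin an alias to the LUN's WWID in /etc/multipath.conf; the WWID and alias below are placeholders, the real WWID can be read from `multipath -ll`:

```
multipaths {
    multipath {
        # placeholder WWID - substitute the one reported by `multipath -ll`
        wwid  3600a098038303634372b4d673178366c
        alias lun_vmstore
    }
}
```

Unlike mpathX numbering, a WWID is derived from the LUN itself, so the alias stays the same on every host and across reboots.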
Run "wipefs -a" on one of the devices in the group if you still can't access the mpath device. Remove the iSCSI storage pools, remove any nodes/sessions with iscsiadm, reboot the node, and optionally remove/re-initialize the LUNs on the SAN side.
Run "vgcreate" with...
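The cleanup steps above can be sketched as a shell function. This is a hypothetical helper (the function name and device argument are mine, not from the post); on a real node you would run each step manually and double-check the device first:

```shell
# Hypothetical cleanup sketch for a broken iSCSI/multipath setup.
# DESTRUCTIVE: wipefs erases signatures - verify the device before running.
cleanup_iscsi_lun() {
    dev="$1"                      # e.g. one path of the broken group, like /dev/sdc
    wipefs -a "$dev"              # clear stale LVM/filesystem signatures
    iscsiadm -m session -u        # log out of all iSCSI sessions
    iscsiadm -m node -o delete    # remove all recorded node entries
    # Then: remove the iSCSI storage pools in Proxmox, reboot the node,
    # and optionally remove/re-initialize the LUNs on the SAN side.
}
```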
Does someone have an idea how to troubleshoot this further? The LVM filter should not be needed; the steps I took match the Proxmox guide. Is my storage.cfg correct with the 4 iscsi entries?
iscsi: iscsi
portal 10.10.1.71
target...
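For comparison, a typical storage.cfg pairing from the Proxmox docs looks like the fragment below; the storage names, IQN, and VG name here are placeholders, and the original post's actual entries are truncated above:

```
iscsi: san1
        portal 10.10.1.71
        target iqn.2001-05.com.example:storage.target1
        content none

lvm: san1-lvm
        vgname vg_san1
        shared 1
        content images
```

The `content none` line keeps Proxmox from using the raw LUN directly, while the `lvm:` entry with `shared 1` exposes the volume group on it to the whole cluster.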
Hello, check:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE
https://pve.proxmox.com/wiki/Advanced_Migration_Techniques_to_Proxmox_VE
7 hours sounds wrong, but it depends on the network you use and how much data you have to transfer. But post a...
* Add an EFI disk via Hardware
* You may also need to set EFI boot entries: https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
* Set / check the boot order
* Windows cannot start with VirtIO right away, because it does not have the drivers...
Yeah, I wondered about that as well; the "Thin provision" option in Datacenter -> Storage -> ZFS is kind of confusing, as in both cases (not enabled, enabled) it is thin-provisioned. The ZFS reservation just won't let you create any new disks if...
It depends on the storage.
If you import to ZFS for example and the "thin provision" option is enabled, then it will be thin, as in, zeros won't be written*.
On RBD, you would need to do a trim/discard after the import, as, IIRC, on RBD, zeros will...