jsterr's latest activity

  • jsterr
Will ZFS 2.4 come with an update within the 9.1.x window, or is it planned for 9.2 (April)? https://git.proxmox.com/?p=zfsonlinux.git;a=summary
  • jsterr
A quick way to look at the file without all the extensive comments: grep -vE '^\s*(#|$)' /etc/lvm/lvm.conf
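For example, on a small sample file (a temp file standing in for /etc/lvm/lvm.conf), the filter keeps only the active settings:

```shell
# Demonstrate the comment/blank-line filter on a sample config.
f=$(mktemp)
printf '# comment\n\nglobal_filter = [ "a|.*|" ]\n' > "$f"
# Drops the comment line and the blank line, leaving only the setting:
grep -vE '^\s*(#|$)' "$f"
```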
  • jsterr
Yeah, the customer thought it was needed and forgot about it. Setting it to 1 did the job :-)
  • jsterr
multipath_component_detection = 0 needed to be set to 1 in /etc/lvm/lvm.conf - the customer had set the parameter, which caused this error. Thank you all for your help, I learned a lot about iSCSI. Even created my own iSCSI target in a Debian VM to test...
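A minimal sketch of the fix described above, run against a copy of the config (on a real system this would be /etc/lvm/lvm.conf):

```shell
# Flip multipath_component_detection from 0 back to 1 and verify.
conf=$(mktemp)
cat > "$conf" <<'EOF'
devices {
    multipath_component_detection = 0
}
EOF
# Restore the default value of 1 so LVM skips multipath component devices:
sed -i 's/multipath_component_detection = 0/multipath_component_detection = 1/' "$conf"
grep 'multipath_component_detection' "$conf"
```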
  • jsterr
I tried it again with aliases; everything looks correct until I need to create the PV and VG. I opened a Proxmox subscription ticket.
  • jsterr
    Thanks to everyone for the help. I managed to fix the reboot loop on my Dell PowerEdge R730xd after upgrading to Proxmox VE 9. The issue was caused by the newest PVE kernel (6.17.x) triggering Machine Check Exceptions on this hardware (older...
  • jsterr
    jsterr replied to the thread CEPH 17.2.8 BluestoreDB bug.
Quincy is already EOL; upgrade to Reef and then maybe even to Squid. The bug you mentioned is only shown for Ceph 18, at least on Clyso's side. But there are others for Ceph 19. I personally don't have any issues with 19.x so far.
  • jsterr
    jsterr replied to the thread CEPH 17.2.8 BluestoreDB bug.
https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef that's the one (17 to 18). https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid (18 to 19)
  • jsterr
    jsterr reacted to SteveITS's post in the thread CEPH 17.2.8 BluestoreDB bug with Like Like.
    17 and 18 are already EOL, why not upgrade to 19? There is https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid and should be a doc for 18 but I’m not seeing it.
  • jsterr
We also did 15 upgrades from Ceph 19 to 20 in nested PVE environments (all clean installs from training); all went fine without issues. The only thing that was not 100% clear was how to check which MDS services are standby via CLI: We looked at the...
  • jsterr
You could also boot into debug mode from the installer and try to capture the logs; debug mode also helps identify on exactly which step in the installer the reboot is triggered.
  • jsterr
Can you try on an identical machine to see if it's hardware-related (as it sounds like it is)?
  • jsterr
/dev/mapper/mpathx devices are just dynamic aliases created by multipathd based on the order of device discovery; they are not guaranteed to be consistent across hosts or even across reboots. Adding or deleting iSCSI targets will also cause them to be...
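One way to get stable names, following the point above, is to pin an alias to a device's WWID in /etc/multipath.conf (the WWID and alias below are placeholders; the real WWID can be read from "multipath -ll"):

```
multipaths {
    multipath {
        wwid  3600508b400105e210000900000490000   # placeholder WWID
        alias san_lun01                           # stable name -> /dev/mapper/san_lun01
    }
}
```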
  • jsterr
Run "wipefs -a" on one of the devices in the group if you still can't access the mpath device. Remove the iSCSI storage pools, remove any nodes/sessions with iscsiadm, reboot the node, optionally remove/re-init the LUNs on the SAN side. Run "vgcreate" with...
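A hedged sketch of that cleanup sequence; the device, IQN, and VG names are placeholders, and RUN=echo prints each command instead of executing it (remove it for real use, and only after double-checking the target device):

```shell
# Dry-run sketch of the recovery steps above (placeholder names throughout).
RUN=echo
$RUN wipefs -a /dev/sdX                                         # wipe stale signatures (placeholder device)
$RUN iscsiadm -m node -T iqn.2024-01.example:target1 -u         # log out of the session (placeholder IQN)
$RUN iscsiadm -m node -T iqn.2024-01.example:target1 -o delete  # remove the node record
$RUN reboot                                                     # reboot the node
$RUN vgcreate vg_san /dev/mapper/mpatha                         # recreate the VG on the mpath device
```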
  • jsterr
This only works if the VMs are not in HA, but if you have HA, you would rather do a live migration for maintenance anyway ;-)
  • jsterr
Does anyone have an idea how to troubleshoot this further? The LVM filter should not be needed; the steps I took match the guide from Proxmox. Is my storage.cfg correct with the 4 iSCSI entries? iscsi: iscsi portal 10.10.1.71 target...
  • jsterr
Hello, check: https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE https://pve.proxmox.com/wiki/Advanced_Migration_Techniques_to_Proxmox_VE 7 hours sounds wrong, but it depends on the network you use and how much data you can transfer. But post a...
  • jsterr
* Add an EFI disk via Hardware * If necessary, also set EFI boot entries: https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries * Set/check the boot order * Windows cannot start with VirtIO right away because it does not have the drivers...
  • jsterr
Yeah, I wondered about that as well; the "Thin Provision" option in Datacenter -> Storage -> ZFS is kind of confusing, as in both cases (not enabled, enabled) it is thin-provisioned. The ZFS reservation just won't let you create any new disks if...
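The difference shows up on the zvol's refreservation property. A sketch (the pool name "rpool" is a placeholder, and RUN=echo prints the commands instead of executing them): PVE's "Thin provision" toggle corresponds to creating the zvol sparse (-s), i.e. without a refreservation.

```shell
# Dry-run sketch: thick vs thin zvol creation (placeholder pool name).
RUN=echo
$RUN zfs create -V 10G rpool/thick-vol     # thick: refreservation ~= volsize
$RUN zfs create -s -V 10G rpool/thin-vol   # thin: refreservation = none
$RUN zfs get refreservation rpool/thick-vol rpool/thin-vol
```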
  • jsterr
    jsterr reacted to aaron's post in the thread Thick Provisioning to Thin Provisioning with Like Like.
Depends on the storage. If you import to ZFS, for example, and the "thin provision" option is enabled, then it will be thin, as in, zeros won't be written*. On RBD, you would need to do a trim/discard after the import, as, IIRC, on RBD, zeros will...