Recent content by luphi

  1.

    ESXi import via proxy

    Hi all, is there any way to connect to an ESXi server via a proxy? The cluster-wide proxy configuration is set, but it is not used for the storage connection. Cheers, luphi
  2.

    [SOLVED] Restore of container including mountpoint

    Answering my own question: it looks like it only works from the CLI: pct restore <vmid> <backup file> --rootfs=<storage>:<size> --mp0=<storage>:<size>,mp=<path> For reference: https://pve.proxmox.com/pve-docs/pct.1.html
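    As a sketch, a concrete invocation might look like the following (vmid, backup path, storage IDs, sizes, and mount path are hypothetical examples, not values from the thread):

    ```shell
    # Hypothetical values for illustration; substitute your own vmid,
    # backup archive, storage IDs, sizes (in GiB), and mount path.
    pct restore 101 /var/lib/vz/dump/vzdump-lxc-101.tar.zst \
        --rootfs local-lvm:8 \
        --mp0 local-lvm:20,mp=/mnt/data
    ```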
  3.

    [SOLVED] Restore of container including mountpoint

    Hello all, I'm trying to move some containers from a server running 7.4 to a server running 8.0. Everything works fine as long as the containers don't include mountpoints. Unfortunately, there is one container with a mountpoint. The mountpoint is included in the backup, but when I do the...
  4.

    [SOLVED] ceph: corrupt disk image

    Everything looks normal again. Thank you again for your professional support. Cheers, luphi
  5.

    [SOLVED] ceph: corrupt disk image

    Strange: restarting OSD.41 brought no improvement. Restarting OSD.22... I lost the SSH connection to the node hosting OSD.22 and was not able to reconnect. I decided to reboot from the local console. Currently it's recovering; I will leave it running until tomorrow... At least, I was able to delete the...
  6.

    [SOLVED] ceph: corrupt disk image

    The other two are hanging: # ceph pg 129.7 query { "snap_trimq": "[]", "snap_trimq_len": 0, "state": "peering", "epoch": 2516138, "up": [ 41, 22, 71 ], "acting": [ 41, 22, 71 ], "info": { "pgid": "129.7"...
  7.

    [SOLVED] ceph: corrupt disk image

    # ceph pg ls|grep -v clean PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE SINCE VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP...
  8.

    [SOLVED] ceph: corrupt disk image

    Reduced data availability: 3 pgs inactive, 2 pgs peering pg 118.34 is stuck peering for 104m, current state remapped+peering, last acting [22,48,66] pg 129.7 is stuck peering for 2h, current state peering, last acting [41,22,71] pg 143.17 is stuck inactive for 104m, current state activating...
  9.

    [SOLVED] ceph: corrupt disk image

    # ceph -s cluster: id: 607db34c-b13e-47a3-8a73-48fc46bdc941 health: HEALTH_WARN Reduced data availability: 3 pgs inactive, 2 pgs peering 2 daemons have recently crashed 106 slow ops, oldest one blocked for 3302 sec, daemons [osd.22,osd.41] have...
  10.

    [SOLVED] ceph: corrupt disk image

    Hello all, I have a corrupted disk image located in one of my ceph pools: rbd ls -l -p ceph_hdd_images rbd: error opening vm-173-disk-1: (2) No such file or directory NAME SIZE PARENT FMT PROT LOCK vm-165-disk-0 50 GiB 2 excl vm-173-disk-0 10 GiB...
  11.

    [SOLVED] hyperconverged ceph cluster network question

    Hello Aaron, those were also my thoughts. Thank you for the confirmation. Cheers, luphi
  12.

    PVE-Headers

    Do you have the community repo configured? Cheers, luphi
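    If not, the no-subscription repository can be enabled with a sources entry roughly like this (a sketch; adjust the Debian codename to match your PVE release):

    ```
    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
    ```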
  13.

    2 DataCenters in Proxmox

    Be aware that if you use the Pis for testing and break them, you may break the whole cluster. See the warning above. Cheers, luphi
  14.

    [SOLVED] hyperconverged ceph cluster network question

    All, I have a dual-port 10 Gb NIC for Ceph in each server of a three-node cluster. Option 1: use an LACP bond for private and public Ceph traffic and distribute the bonds across different switches. Option 2: separate private and public networks without redundancy. Which option would one...
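    For reference, Option 1 could be sketched in /etc/network/interfaces roughly like this (interface names, VLAN tags, and addresses are assumptions, not from the thread):

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves enp5s0f0 enp5s0f1    # assumed names of the two 10G ports
        bond-miimon 100
        bond-mode 802.3ad                # LACP
        bond-xmit-hash-policy layer3+4

    auto bond0.100
    iface bond0.100 inet static
        address 10.10.100.11/24          # Ceph public network (assumed)

    auto bond0.101
    iface bond0.101 inet static
        address 10.10.101.11/24          # Ceph cluster/private network (assumed)
    ```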
  15.

    VMs unable to boot from ZFS after upgrade to PVE 7

    I think it's related to my ugly disk setup: root@pve:/var/lib/vz# fdisk -l /dev/sdg Disk /dev/sdg: 1.86 TiB, 2048408248320 bytes, 4000797360 sectors Disk model: Samsung SSD 860 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size...