Search results

  1. [SOLVED] Restore of container including mountpoint

    Answering my own question: it looks like it only works from the CLI: pct restore <vmid> <backup file> --rootfs=<storage>:<size> --mp0=<storage>:<size>,mp=<path> For reference: https://pve.proxmox.com/pve-docs/pct.1.html
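
    For illustration, a minimal invocation of that command with hypothetical values; the VMID 105, the backup filename, the local-zfs storage name, and the sizes are placeholders, not taken from the thread:

        # restore the container and re-create rootfs (8 GiB) plus mount point mp0 (32 GiB)
        pct restore 105 /var/lib/vz/dump/vzdump-lxc-105-2023_08_01-00_00_00.tar.zst \
            --rootfs local-zfs:8 \
            --mp0 local-zfs:32,mp=/mnt/data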
  2. [SOLVED] Restore of container including mountpoint

    Hello all, I'm trying to move some containers from a server running 7.4 to a server running 8.0. Everything works fine as long as the containers don't include mountpoints. Unfortunately, there is one container that includes a mountpoint. The mountpoint is included in the backup, but when I do the...
  3. [SOLVED] ceph: corrupt disk image

    Everything looks normal again. Thank you again for your professional support. Cheers, luphi
  4. [SOLVED] ceph: corrupt disk image

    Strange: restarting OSD.41 brought no improvement. Restarting OSD.22... I lost the SSH connection to the node hosting OSD.22 and was not able to reconnect, so I decided to reboot from the local console. Currently it's recovering; I will leave it running until tomorrow... At least, I was able to delete the...
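
    As a hedged aside: on a PVE-managed Ceph node, a single OSD such as OSD.22 is usually restarted through its systemd unit; the unit name follows the standard Ceph convention and is not quoted from the thread:

        systemctl restart ceph-osd@22.service   # restart only this one daemon
        ceph -s                                 # then watch the cluster settle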
  5. [SOLVED] ceph: corrupt disk image

    The other two are hanging: # ceph pg 129.7 query { "snap_trimq": "[]", "snap_trimq_len": 0, "state": "peering", "epoch": 2516138, "up": [ 41, 22, 71 ], "acting": [ 41, 22, 71 ], "info": { "pgid": "129.7"...
  6. [SOLVED] ceph: corrupt disk image

    # ceph pg ls|grep -v clean PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE SINCE VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP...
  7. [SOLVED] ceph: corrupt disk image

    Reduced data availability: 3 pgs inactive, 2 pgs peering pg 118.34 is stuck peering for 104m, current state remapped+peering, last acting [22,48,66] pg 129.7 is stuck peering for 2h, current state peering, last acting [41,22,71] pg 143.17 is stuck inactive for 104m, current state activating...
  8. [SOLVED] ceph: corrupt disk image

    # ceph -s cluster: id: 607db34c-b13e-47a3-8a73-48fc46bdc941 health: HEALTH_WARN Reduced data availability: 3 pgs inactive, 2 pgs peering 2 daemons have recently crashed 106 slow ops, oldest one blocked for 3302 sec, daemons [osd.22,osd.41] have...
  9. [SOLVED] ceph: corrupt disk image

    Hello all, I have a corrupted disk image located in one of my ceph pools: rbd ls -l -p ceph_hdd_images rbd: error opening vm-173-disk-1: (2) No such file or directory NAME SIZE PARENT FMT PROT LOCK vm-165-disk-0 50 GiB 2 excl vm-173-disk-0 10 GiB...
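
    A hedged sketch of how one might inspect and, if the data is expendable, remove such an image; the pool and image names are taken from the snippet above, but treat this as an outline under those assumptions, not as the thread's actual resolution:

        rbd info ceph_hdd_images/vm-173-disk-1   # check whether the image header is readable
        rbd rm ceph_hdd_images/vm-173-disk-1     # delete the image only if it is beyond repair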
  10. [SOLVED] hyperconverged ceph cluster network question

    Hello Aaron, those were also my thoughts. Thank you for the confirmation. Cheers, luphi
  11. PVE-Headers

    Do you have the community repo configured? Cheers, luphi
  12. 2 DataCenters in Proxmox

    Be aware that if you use the Pis for testing and break them, you may break the whole cluster. See the warning above. Cheers, luphi
  13. [SOLVED] hyperconverged ceph cluster network question

    All, I have a dual-port 10 GB NIC for Ceph in each server of a three-node cluster. Option 1: use an LACP bond for private and public Ceph traffic and distribute the bonds across different switches. Option 2: separate private and public networks without redundancy. Which option would one...
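
    For reference, a minimal /etc/network/interfaces sketch of Option 1 using the ifupdown2 syntax PVE ships with; the NIC names (enp1s0f0, enp1s0f1) and the address are assumptions, not from the thread:

        auto bond0
        iface bond0 inet static
            address 10.10.10.1/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode 802.3ad                  # LACP; both ports must terminate on LACP-capable switches
            bond-xmit-hash-policy layer3+4
            bond-miimon 100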
  14. VMs unable to boot from ZFS after upgrade to PVE 7

    I think it's related to my ugly disk setup: root@pve:/var/lib/vz# fdisk -l /dev/sdg Disk /dev/sdg: 1.86 TiB, 2048408248320 bytes, 4000797360 sectors Disk model: Samsung SSD 860 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size...
  15. VMs unable to boot from ZFS after upgrade to PVE 7

    Possible root cause: the system is still running kernel 5.4. Next issue: 5.11 panics immediately. Cheers, luphi
  16. VMs unable to boot from ZFS after upgrade to PVE 7

    Hello all, I just updated from 6.4 to 7. Everything went smoothly, but... I have an issue booting VMs (CTs are doing fine) as long as any of the assigned disks is on ZFS. It doesn't matter whether the VM tries to boot from CD or from disk. As soon as there is a disk configured on ZFS, the...
  17. VM interruption during backups - vzdump Disable RAM in snapshot mode?

    The issue even exists in PVE 7, and yes, I'm using ZFS. I'm going to switch the network driver from virtio to e1000 tomorrow...
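
    As a hedged aside, that driver switch can be done from the CLI; VMID 100 and bridge vmbr0 below are placeholders:

        # note: omitting macaddr= lets PVE generate a new MAC for the NIC
        qm set 100 --net0 e1000,bridge=vmbr0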
  18. VM interruption during backups - vzdump Disable RAM in snapshot mode?

    Hello mko, I'm facing the same issues. Have you been able to fix it in the meantime? Cheers, luphi
  19. proxmox 7.0 sdn beta test

    Hello all, is there a way to add SDN resources to a pool, like one can do with VMs and storage resources? Cheers, luphi
  20. backup: PBS vs legacy

    All, now that the Proxmox Backup Server has been released, what might be a good reason to stay with the old legacy backup solution? Cheers, luphi
