Search results

  1. cephfs mount inside LXC

    Did you find any solution? Using a mount point (mp0) (only via the command line) does the job. BUT it's then impossible to migrate the VM to another host. Using the kernel driver or the FUSE driver both fail because of missing modules... 1. Did you find any solution to mount a cephfs inside an...
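The mp0 workaround mentioned above can be sketched as a bind mount in the container config, assuming the host already mounts the CephFS under /mnt/pve/cephfs; the container ID and target path here are hypothetical:

```shell
# /etc/pve/lxc/<vmid>.conf -- hypothetical container ID and paths.
# A raw host path in mp0 is a bind mount: it exposes the host's CephFS
# mount inside the container at /mnt/cephfs.
# Caveat (as the post notes): bind mount points block live migration.
mp0: /mnt/pve/cephfs,mp=/mnt/cephfs
```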
  2. [Solved] HTML table broken for vzdump backup status email reports (PVE 7.2-14)

    Thanks. Same conclusion. I'll mark it as solved for now and try sending the reports to another mail client or Gmail.
  3. [Solved] HTML table broken for vzdump backup status email reports (PVE 7.2-14)

    Sure ... email headers ... This is a multi-part message in MIME format. ------_=_NextPart_001_16685657933653369 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 8bit VMID NAME STATUS TIME SIZE FILENAME 120 dns1...
  4. [Forum issue] WebP images are rejected by forum

    Efficient screenshots. WebP images are rejected by the forum. A bit annoying to have to convert files to a heavier PNG or another format when WebP is supported by all browsers. Regards,
  5. [Solved] HTML table broken for vzdump backup status email reports (PVE 7.2-14)

    Hi, All the HTML is broken in vzdump backup reports: all VMs appear under the same VMID, and the time, size, and filename columns are missing (only the headers show). I cut the screenshot into 3 parts to remove the useless bits, but everything appears under the single VMID 120. This seems like a backup log parsing issue...
  6. What is the "Regenerate Image" in Cloud-Init section ?

    I use a cloud-init image, so I have the Cloud-Init section. When I change a setting, it is simply applied after reboot, as expected. So it seems the Cloud-Init CD-ROM image is automatically rebuilt on change, and I have never needed this "Regenerate Image" button. "Regenerate Image" is missing from...
  7. [SOLVED] [Backup & Restore] How to create a CT from an existing volume (disk-image)

    Thanks Fabian, I just created the <vmid>.conf (with an unused VMID), and it worked. Just a bit odd that we can't do it from the GUI or pvesh. root@pve1:/# cat /etc/pve/nodes/pve1/lxc/198.conf arch: amd64 cores: 1 features: nesting=1 hostname: ubutst1 memory: 512 net0...
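The approach this thread converges on can be sketched as follows: write a minimal config referencing the already-restored volume, then start the CT with pct. The storage name (ceph-rbd), volume name, size, and VMID below are hypothetical placeholders, not taken from the post:

```shell
# Hypothetical sketch: attach an existing (restored) RBD volume to a fresh CT.
VMID=198
cat > /etc/pve/lxc/${VMID}.conf <<'EOF'
arch: amd64
cores: 1
hostname: ubutst1
memory: 512
rootfs: ceph-rbd:vm-198-disk-0,size=8G
EOF
# Start the container from the hand-written config.
pct start ${VMID}
```

Writing the file under /etc/pve/lxc/ is equivalent to /etc/pve/nodes/{node}/lxc/ for the local node, since /etc/pve is the cluster filesystem.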
  8. [SOLVED] [Backup & Restore] How to create a CT from an existing volume (disk-image)

    Concern: Backup & Restore. I restored a volume (from Ceph RBD) and want to start a CT from it. How should I create & start a CT from a disk image I already have? Preferably without re-creating the /etc/pve/nodes/{node}/lxc/{vmid}.conf config file from an existing one. (Consider I lost the...
  9. How to create a container from command-line (pct create) ?

    Great, thanks. It works. Working command line: pct create 117 /mnt/pve/cephfs/template/cache/jammy-minimal-cloudimg-amd64-root.tar.xz --hostname gal1 --memory 1024 --net0 name=eth0,bridge=vmbr0,firewall=1,gw=192.168.10.1,ip=192.168.10.71/24,tag=10,type=veth --storage localblock --rootfs...
  10. How to create a container from command-line (pct create) ?

    Hi, I'm stuck on this. Why does it not create the volume? root@pve1:~# pct create 117 /mnt/pve/cephfs/template/cache/jammy-minimal-cloudimg-amd64-root.tar.xz --hostname gal1 --memory 1024 --net0 name=eth0,bridge=vmbr0,firewall=1,gw=192.168.10.1,ip=192.168.10.71/24,tag=10,type=veth...
  11. Ceph 16.2.7 Pacific cluster Crash

    More details in the bug report: https://tracker.ceph.com/issues/53899 At the end of the bug report, you'll see that I found a way out of this problem by extending the underlying LV (I run BlueStore on LVM). But still no clue (yet) on how to recover from this situation on...
  12. Ceph 16.2.7 Pacific cluster Crash

    Additional info: root@pve1:~# ceph status cluster: id: e7628d51-32b5-4f5c-8eec-1cafb41ead74 health: HEALTH_WARN 1 filesystem is degraded 1 MDSs report slow metadata IOs mon pve3 is low on available space 2 osds down 3...
  13. Ceph 16.2.7 Pacific cluster Crash

    Hi, Sounds like we're in the same boat... Jan 16 02:30:17 pve3 ceph-osd[33049]: 27: (OSD::init()+0x58d) [0x558dfdc8e5ed] Jan 16 02:30:17 pve3 ceph-osd[33049]: 28: main() Jan 16 02:30:17 pve3 ceph-osd[33049]: 29: __libc_start_main() Jan 16 02:30:17 pve3 ceph-osd[33049]: 30: _start() Jan 16...
  14. [SOLVED] How to delete a VM or Container that has no storage and no or missing storage pool.

    Hi, root@hystou3:~# pvesm add dir pool1 --path /mnt/pool1 root@hystou3:~# pvesm list pool1 Volid Format Type Size VMID root@hystou3:~# pct destroy 118 unable to parse directory volume name 'vm-118-disk-0' root@hystou3:~# pct destroy 118 --force 1 --purge 1 unable to parse directory volume...
  15. [SOLVED] How to delete a VM or Container that has no storage and no or missing storage pool.

    You mean 118.conf, but no, there's none in qemu-server. However, thanks for the double check, since I found the definition of the container in /etc/pve/lxc/118.conf. So lxc-destroy did not remove it. After digging a bit more in the docs, the Proxmox way to manage LXC is with the pct...
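Putting the posts of this thread together, the cleanup could look like the sketch below. It assumes, as in the snippets, a leftover container 118 whose storage pool "pool1" no longer exists; the fallback step is an assumption on my part, not stated in the visible excerpts:

```shell
# Hypothetical cleanup sketch for a CT whose storage pool is gone.
# 1. Re-create a placeholder storage so PVE can resolve the config,
#    then destroy the CT with the Proxmox tooling (pct, not lxc-destroy):
pvesm add dir pool1 --path /mnt/pool1
pct destroy 118 --purge 1
# 2. If pct still refuses because the volume name cannot be parsed,
#    removing the stale config from the cluster filesystem is the last
#    resort (there are no volumes left to clean up anyway):
rm /etc/pve/lxc/118.conf
```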
  16. [SOLVED] How to delete a VM or Container that has no storage and no or missing storage pool.

    Just realized that since it's an LXC, qm might not be the best way ;-) So: root@hystou2:~# lxc-destroy --name=118 root@hystou2:~# lxc-destroy --name=118 lxc-destroy: 118: tools/lxc_destroy.c: main: 242 Container is not defined root@hystou2:~# grep 118 /etc/pve/.vmlist "118": { "node": "hystou3"...
  17. [SOLVED] How to delete a VM or Container that has no storage and no or missing storage pool.

    Hi, I have an old VM that existed on an old pool which I stopped and deleted in the meantime. The VM is still listed in the web GUI, but I can't delete it. From the GUI, I get the error: TASK ERROR: storage 'pool1' does not exist (which is true). And from the command line: root@hystou2:~#...
  18. [SOLVED] How to re-sync Ceph after HW failure ?

    Hi, yes, I stopped the mon just long enough to extract the map. Problem is fixed. Thanks!
  19. [SOLVED] How to re-sync Ceph after HW failure ?

    Hi, Thanks for the tip. Here is what I did, from the new node (hystou1), without stopping the remaining available monitors: ln -s /etc/pve/ceph.conf /etc/ceph/ mkdir /root/tmp ceph auth get mon. -o /root/tmp/key ceph mon getmap -o /root/tmp/map ceph-mon -i `hostname` --mkfs --monmap...
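The snippet above is cut off mid-command; the steps it lists match Ceph's documented manual monitor (re-)creation procedure. The sketch below completes the sequence from the Ceph documentation rather than from the post itself, and assumes the ceph-mon unit name matches the hostname:

```shell
# Rebuild a monitor on a replaced node using the surviving quorum.
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf   # PVE keeps ceph.conf on the cluster FS
mkdir /root/tmp
ceph auth get mon. -o /root/tmp/key            # fetch the mon. keyring from the quorum
ceph mon getmap -o /root/tmp/map               # fetch the current monmap
ceph-mon -i "$(hostname)" --mkfs --monmap /root/tmp/map --keyring /root/tmp/key
systemctl start ceph-mon@"$(hostname)"         # start the rebuilt monitor
```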
  20. [SOLVED] How to re-sync Ceph after HW failure ?

    Hi, After a HW failure in a 3-node Proxmox VE 6.3 cluster, I replaced the hardware and re-joined the new node. The replaced node is called hystou1; the two other nodes are hystou2 and hystou3. I had a couple of minor issues when re-joining the new node since it has the same name, and I had to remove...