Search results

  1.

    8 server and so what?

    All servers: 2x 480 GB NVMe, 4x 3.7 TB NVMe, 2x 1 Gb NICs, 4x 10 Gb NICs
  2.

    8 server and so what?

    Hi folks, I have 8 servers. What would you do? 1) All of them with Ceph and VMs, or 2) 5 nodes as a Proxmox Ceph cluster and 3 servers for VMs? I'll wait for your thoughts and suggestions.
  3.

    Issue with qm remote-migrate: Target Node Reports as Too Old Despite Newer Version

    It seems to expect a different version of cloud-init. Try disabling/removing the cloud-init drive and try again.
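
    Detaching the cloud-init drive before retrying the migration can be sketched with the qm CLI; the VMID (100) and the drive slot (ide2) below are assumptions, check your own config first:

    ```shell
    # Find where the cloud-init drive is attached (look for a *:cloudinit entry)
    qm config 100

    # Assuming it sits on ide2, detach it before retrying qm remote-migrate
    qm set 100 --delete ide2

    # After a successful migration, the cloud-init drive can be re-added on the
    # target node, e.g.: qm set 100 --ide2 <storage>:cloudinit
    ```

    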
  4.

    After add a 2nd nvme, lvm-thin just "broke"... sort of...

    Hi folks, there is this customer who has a Supermicro server and a riser card with a RAID card and 1 NVMe. The riser card has 3 slots: the first slot has the RAID card, the second one has an NVMe, and the third (and last) one was empty. On the RAID card he had installed Proxmox VE 7.x. (Ok! I know that it is time to...
  5.

    HA cluster with two node and qDevice + CEPH: don't work HA: why?

    You can have it by using GlusterFS. I have many customers with a 2-node setup using GlusterFS. PM me for more info.
  6.

    virtio-win-0.1.262-1 Released

    Well, at least we got an alternative.
  7.

    virtio-win-0.1.262-1 Released

    After the installation I got a rollback, but it seems that everything is fine.
  8.

    virtio-win-0.1.262-1 Released

    It seems to work fine when you do it at Windows installation time. I was able to install virtio-scsi, NetKVM, and Balloon.
  9.

    virtio-win-0.1.262-1 Released

    I just installed a Windows 2022 VM a couple of hours ago, then used 0.1.248, and it works fine.
  10.

    Backup PVE Bare-metal, any soon?

    Think again: https://pbs.proxmox.com/wiki/index.php/Roadmap#Roadmap
  11.

    Backup PVE Bare-metal, any soon?

    Hi there. Any idea when we will be able to perform a Proxmox VE bare-metal backup? Is it coming? When?
  12.

    Migrating disk from local-lvm to directory type storage and the disk image fill all the qcow2 image.

    Oops... Ok folks, my bad! This topic was already discussed here: https://forum.proxmox.com/threads/solved-looks-like-a-qm-movedisk-bug-reclaim-disk-space-when-move-virtual-disk-from-lvm-to-directory.76706/ The VM needs to be offline before moving the disk. Sorry guys, my bad.
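
    The offline move described above can be sketched with the qm CLI; the VMID (104), disk slot (scsi0), and target storage name (vms-dir) are assumptions:

    ```shell
    # Shut the VM down first so the disk is moved offline and space is reclaimed
    qm shutdown 104

    # Move the disk from local-lvm to the directory-backed storage as qcow2
    qm move-disk 104 scsi0 vms-dir --format qcow2

    # Start the VM again once the move completes
    qm start 104
    ```

    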
  13.

    Migrating disk from local-lvm to directory type storage and the disk image fill all the qcow2 image.

    So how do you explain that I can create a 1 TB virtual image using qcow2 and directory-based storage?
    qemu-img info vm-104-disk-1.qcow2
    image: vm-104-disk-1.qcow2
    file format: qcow2
    virtual size: 1 TiB (1099511627776 bytes)
    disk size: 208 KiB
    cluster_size: 65536
    Format specific information...
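
    That thin allocation is easy to reproduce with qemu-img alone; a minimal sketch, reusing the file name from the snippet:

    ```shell
    # Create a 1 TiB qcow2 image; without preallocation only metadata is written,
    # so the file on disk starts out at a few hundred KiB
    qemu-img create -f qcow2 vm-104-disk-1.qcow2 1T

    # Compare the virtual size (1 TiB) with the actual disk usage
    qemu-img info vm-104-disk-1.qcow2
    ```

    
    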
  14.

    Migrating disk from local-lvm to directory type storage and the disk image fill all the qcow2 image.

    As I said, new disk images work fine. I was able to create a 1 TB image on the /vms mount point, which has only about 900 GB! I already tried preallocation off, default, and metadata, and nothing changes when migrating a virtual disk from local-lvm to directory-based storage.
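
    For reference, the preallocation setting mentioned above lives on the directory storage definition in /etc/pve/storage.cfg; a sketch, with the storage name (vms-dir) and path assumed:

    ```
    dir: vms-dir
            path /vms
            content images
            preallocation off
    ```

    Valid values are off, metadata, falloc, and full; it only affects how new images are allocated, not how an existing disk is copied during a move.
    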
  15.

    Migrating disk from local-lvm to directory type storage and the disk image fill all the qcow2 image.

    But new virtual disks are created thin-provisioned. Look: https://www.youtube.com/watch?v=nW_XZoNDlAg