Recent content by mbosma

  1. Cloning from ZFS snapshot in GUI

    Hi all, we're running into a limitation in the GUI when trying to clone a VM from a snapshot on ZFS storage. On LVM this works fine; I know LVM works a bit differently and also allows for non-linear snapshots, which ZFS doesn't. A full clone, however, should be possible AFAIK. I'm able to...
  2. VM update from Server 2022 to Server 2025

    Does it work if you change the OS disk to IDE? If so, after the upgrade you could add another disk as virtio-scsi, install the drivers and change your OS disk back to virtio-scsi.
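
    For reference, a minimal CLI sketch of that bus swap; the VM ID (100) and the volume name are made up, so adjust them to your setup:

        qm set 100 --delete scsi0                  # detach the OS disk; it reappears as "unused0"
        qm set 100 --ide0 local-lvm:vm-100-disk-0  # reattach the same volume on the IDE bus
        qm set 100 --boot order=ide0               # keep booting from the OS disk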
  3. directory I don't see all the space

    In that case you could share the disk using NFS and mount it on both nodes from the Proxmox GUI. This way the storage can be marked as shared, because it's available on both machines.
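
    A sketch of the same thing from the CLI; the storage name, server address, and export path are placeholders:

        pvesm add nfs shared-nfs --server 192.168.1.10 --export /tank/shared --content images,iso

    NFS storages are treated as shared automatically, so both nodes can use the same content once it mounts.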
  4. SSD wearout at 99%

    If it were true, yes, it would be very bad. Given that your disk only has about 400 GB of writes, that is impossible. It seems like there's a bug in the firmware which counts the media_wearout_indicator the wrong way around...
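
    One way to compare the wearout and write counters yourself, assuming the disk is /dev/sda (attribute names vary per vendor):

        smartctl -A /dev/sda | grep -Ei 'wearout|lbas_written'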
  5. Allow edit of initial memory size

    Hi Devs, Using memory hotplug according to the Proxmox wiki, I successfully increased and decreased the memory size of a Debian/Ubuntu guest. However, when booting a virtual machine with memory hotplug enabled and more than 21 GB of memory, the guest OS can kernel panic with the following error...
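
    For context, the hotplug setup from the wiki looks roughly like this from the CLI (VM ID 100 is hypothetical):

        qm set 100 --hotplug disk,network,usb,memory   # add "memory" to the hotplug list
        qm set 100 --numa 1                            # NUMA must be enabled for memory hotplug
        qm set 100 --memory 16384                      # memory size in MiB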
  6. Fleecing on Ceph RBD storage

    I'll look into adding an NVMe disk as well @spirit. The servers all have PM981 disks installed, as they were repurposed VMware servers with those as ex-boot disks. I'm not quite sure I'd like to use them because of their low write endurance. Would you reckon the amount of writes will be low enough to...
  7. Fleecing on Ceph RBD storage

    I tried finding some recommendations on using fleecing on Ceph storage but couldn't find an answer yet. We're running a fresh PVE cluster and PBS server on versions 8.2 and 3.2. This cluster uses Ceph as block storage with NVMe disks on a 2x10G link using LACP. The same link is being used...
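
    For anyone finding this later: on PVE 8.2, fleecing can be enabled per backup job or on the command line; the VM ID and storage names below are placeholders:

        vzdump 100 --storage pbs --fleecing enabled=1,storage=local-lvm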
  8. Random zfs replication errors

    Thank you for the update! I mitigated the issue by switching to secure; I'll update the package once we get a maintenance window from the client.
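
    Assuming "secure" here refers to the migration/replication transport, that switch is a one-line change in /etc/pve/datacenter.cfg:

        migration: type=secure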
  9. ZFS Installation: Storage Discrepancies and Overwriting Concerns

    Hi Widely, It won't show up as "used storage" in the other location, as this data is not used in that location. You will, however, see that the total storage of the other location is lower than before. In this case you'll see the total size of "local" increasing and decreasing this past year due...
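
    A quick way to watch that shared accounting on the pool itself, assuming it's named rpool:

        zfs list -o space rpool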
  10. [SOLVED] Upgrade PBS 2 to 3 while PVE still on V7

    I have a few clusters still on PVE 7 with a PBS on version 3. So far no issues; you just can't use the newer features like namespaces.
  11. Integrating Proxmox SDN with existing SDN network

    I can see your patch on the mailing list; would it help if I created a feature request in git?
  12. opened a new thread: USB 3.0 speed slow - I was wrong it is the Ethernet Port--- But now?

    I'm sorry to hear that; I've been running this setup with okay results.

    root@pbs1:~# dd if=/dev/zero of=testfile.bin bs=4G count=1
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB, 2.0 GiB) copied, 6.73434 s, 319 MB/s

    This is done using an Intel S3610 1.92 TB SATA SSD in a 2.5" enclosure...
  13. Opnsense on a Proxmox VE 8 with a single NIC

    Hi Jarvar, It is possible to assign a VLAN in OPNsense or to create an interface with the VLAN managed in Proxmox. The problem with this, however, is that you can only have a VLAN set up in one of these two ways at a time. This is a limitation of the inner workings of Linux bridges. The solution for...
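
    One common approach is a VLAN-aware bridge, sketched here for /etc/network/interfaces with eno1 as an assumed NIC name:

        auto vmbr0
        iface vmbr0 inet manual
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0
                bridge-vlan-aware yes
                bridge-vids 2-4094

    Each VM then tags its own traffic via its NIC settings, e.g. qm set 100 --net0 virtio,bridge=vmbr0,tag=10 (VM ID hypothetical).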
  14. opened a new thread: USB 3.0 speed slow - I was wrong it is the Ethernet Port--- But now?

    1. Get the ID of your disk:

    michael@next-michael:~$ ls /dev/disk/by-id/
    dm-name-cryptdata   nvme-eui.002538bb21047659-part2
    dm-name-cryptswap   nvme-eui.002538bb21047659-part3...
  15. Thin pool pve-data-tpool (253:4) transaction_id is 32, while expected 29.

    As a matter of fact I do; I had to apply this solution once more a while back. These are the steps:

    1. Back up the VG metadata: vgcfgbackup pve -f lvbackup
    2. Edit the transaction_id to match what it's expecting: vim lvbackup
    3. Restore the backup you made: vgcfgrestore pve -f lvbackup --force

    You're not done...