Reduce Proxmox boot partition

Discussion in 'Proxmox VE: Installation and configuration' started by carsten2, Feb 8, 2019.

  1. carsten2

    carsten2 Member

    Joined:
    Mar 25, 2017
    Messages:
    47
    Likes Received:
    3
    I have two 512 GB mirrored SSDs as the Proxmox boot disk with ZFS and want to shrink the root file system partition so I can add a ZIL/SLOG for a separate ZFS data pool. This gives a great improvement, as the system SSDs are idle most of the time anyway, so there is no conflict between the Proxmox system and the SLOG/ZIL of another ZFS pool on the same drive. I have already done this by installing to a smaller disk and then adding a mirror with a larger disk, leaving free space at the end.

    How is it possible to shrink the rpool root partition online (it is a remote server)? E.g. remove the second mirror disk, repartition it, send the ZFS file systems to the second disk, and somehow make Proxmox boot from that disk?

    Any step by step instructions?
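    Roughly what I have in mind, as an untested sketch (device names, partition numbers and sizes are only placeholders):

        # break the mirror so rpool keeps running on the first SSD only
        zpool status rpool
        zpool detach rpool /dev/sdb3

        # repartition the freed SSD: a smaller ZFS partition, the rest left free for the SLOG
        sgdisk --delete=3 /dev/sdb
        sgdisk --new=3:0:+100G --typecode=3:BF01 /dev/sdb

        # create a new, smaller pool on the shrunken partition
        zpool create -o ashift=12 rpool2 /dev/sdb3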
     
  2. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,257
    Likes Received:
    269
    Hi,

    The ZIL/SLOG should have its own dedicated disk. Putting it on a disk that is already in use can reduce performance drastically.
    Anyway, you can't shrink a ZFS disk without a new installation.
     
  3. carsten2

    carsten2 Member

    Joined:
    Mar 25, 2017
    Messages:
    47
    Likes Received:
    3
    This is not an answer to my question:

    1) The performance reduction is only theoretical and in practice insignificant or even absent, because the Proxmox system disk is 99% idle. On the contrary: this change in system configuration is the cheapest and easiest major performance improvement that can be made without adding hardware. I have already done this in other installations (by using a smaller disk for the initial install), and the result is that the systems are MUCH faster afterwards.

    2) It is definitely possible, and I already pointed out which way to go, i.e. break the ZFS mirror, create a new, smaller pool, send the original ZFS file systems to it, and then somehow switch the boot order. The question was whether someone has done this already, has remarks on caveats or omissions, or, in the best case, step-by-step instructions on how to do it.
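    The copy/boot part I have in mind would look roughly like this (untested; pool, dataset and device names are only placeholders, and I assume GRUB booting as on a default Proxmox 5.x ZFS install):

        # replicate the running root pool to the new, smaller pool
        zfs snapshot -r rpool@move
        zfs send -R rpool@move | zfs recv -F rpool2

        # mark the root dataset on the new pool and make the second disk bootable
        zpool set bootfs=rpool2/ROOT/pve-1 rpool2
        grub-install /dev/sdb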
     
  4. carsten2

    carsten2 Member

    Joined:
    Mar 25, 2017
    Messages:
    47
    Likes Received:
    3
  5. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,405
    Likes Received:
    296
    There is a lot of manual tweaking involved. I'd suggest trying it out on a local copy of your online server and doing it multiple times while recording all commands, etc.
    In general, everything is possible; the question is how much work you have to put in. You have to create a new pool with a new name and change all references to rpool inside your root FS to the new pool. Best would be to boot a ZFS-enabled live Linux on your server (or install ZFS on the live Linux afterwards) and do the migration offline. You can skip the whole renaming step, because you can just import the new pool under the old name after removing the old rpool.
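    A minimal sketch of that last step from a ZFS-enabled live system, assuming the new pool was created as "rpool2" (names are placeholders, untested):

        # with the old rpool already destroyed or its disk disconnected
        zpool import -f rpool2 rpool            # import the new pool under the old name
        zpool set bootfs=rpool/ROOT/pve-1 rpool
        zpool export rpool                      # export cleanly, then reboot into the migrated system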

    The cheapest option to build a fast system is to have spinners plus only one (maximum two) small enterprise-grade ZIL/SLOG devices (32 GB is totally enough) and only one pool for everything, with as many vdevs as possible. It does not make sense to have two ZFS pools sharing the available ARC if it is not necessary.
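    For example, attaching a mirrored SLOG to an existing spinner pool is a one-liner (pool and device names are just examples):

        zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1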
     
  6. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    475
    Likes Received:
    51
    There is no easy way to do this after the fact. As a fresh install you can do it by installing Debian first with fixed partitions, leaving free space for the others (log/data/etc.).
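    For instance, a fresh-install layout that leaves room at the end of the disk could look something like this (sizes and devices are just examples):

        parted /dev/sda -- mklabel gpt
        parted /dev/sda -- mkpart ESP fat32 1MiB 513MiB
        parted /dev/sda -- set 1 esp on
        parted /dev/sda -- mkpart root 513MiB 100GiB
        # everything beyond 100GiB is intentionally left unpartitioned for later SLOG/data use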

    I'm really curious; have you done this and realized ANY improvement? I'd really be interested in a system config and before/after use-case benchmarks.
     
  7. carsten2

    carsten2 Member

    Joined:
    Mar 25, 2017
    Messages:
    47
    Likes Received:
    3
    Yes, it is much better with a log device:

    With Log: pveperf FSYNCS/SECOND: 1404.95
    Without Log: pveperf FSYNCS/SECOND: 96.31
    Factor 14

    Copying large amounts of files to a Windows VM: transfer speed +40%.
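    For reference, pveperf can be pointed at a pool's mountpoint to reproduce this kind of measurement (path is just an example):

        pveperf /datapool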
     