Search results

  1. B

    Can I rename the boot zfs pool?

    I've decided it will be quicker to re-install, hoping that the "advanced" mode in the installer will allow me to set the pool name.
  2. B

    Can I rename the boot zfs pool?

    I've tried it - renaming the pool under Ubuntu:
    zpool export rpool
    zpool import rpool rpool0
    zpool set bootfs=rpool0 rpool0
    zpool export rpool0
    But when I reboot into pve, the pool is still named "rpool"!
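The rename attempt quoted above can be sketched as a dry run. This is a hedged sketch, not the confirmed fix: the `ROOT/pve-1` dataset path and the `proxmox-boot-tool refresh` step are assumptions about a default Proxmox VE ZFS install, and the printed commands are meant to be run from a live/rescue environment, never from the pool that is currently booted.

```shell
#!/bin/sh
# Dry-run sketch of renaming a ZFS boot pool (rpool -> rpool0).
# Nothing is executed against ZFS here; the plan is only printed.
old=rpool
new=rpool0

# Assumption: bootfs should point at the root *dataset* (rpool0/ROOT/pve-1
# on a default PVE install), not at the bare pool as in the quoted attempt.
plan="zpool export $old
zpool import $old $new
zpool set bootfs=$new/ROOT/pve-1 $new
zpool export $new"

printf '%s\n' "$plan"
# Likely reason the snippet still sees "rpool" after reboot: the initramfs
# and bootloader embed the old pool name and need refreshing too
# (e.g. proxmox-boot-tool refresh) before rebooting.
```

The last comment is the key caveat: renaming the pool alone leaves the boot chain pointing at the old name, which matches the behaviour reported in the snippet.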
  3. B

    Can I rename the boot zfs pool?

    I've installed with the system partition as a zfs mirror. It called it "rpool" which unfortunately clashes with the name of the data zfs pool I already had. Can I rename the boot zfs pool without compromising the boot process? I realize I'll have to do that with a zfs enabled system booted...
  4. B

    Just checking before buying my new machine

    It looks ok to me, however I'd think about some redundancy on the SSD/HD side.
  5. B

    i5-12500 vs i7-12700 - homelab

    I'd spend money on RAM rather than a faster processor. Only my opinion YMMV.
  6. B

    [SOLVED] Download PVE8 packages and continue upgrade offline

    Thanks for this. How did you image the system disk? Did you manage to do that while connected remotely? I'd normally use Clonezilla for that, but it requires local access. A colleague has told me he has done an upgrade successfully with one VM still running, although his was not the router VM...
  7. B

    [SOLVED] Download PVE8 packages and continue upgrade offline

    PS Forgot to ask - did you do it remotely or directly connected to the server?
  8. B

    [SOLVED] Download PVE8 packages and continue upgrade offline

    Good news. I'll try it (I've already got good backups of all my VMs and containers). I'll report here - maybe not for a day or so though.
  9. B

    [SOLVED] Download PVE8 packages and continue upgrade offline

    I've got the same problem (and the server is in my garage!). In the past I've configured a router/modem and attached that to the network, killed the router VM, altered the gateway address and then followed the update instructions. However I'd be glad to hear that your technique would work...
  10. B

    Shifting window to proxmox

    https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE I used Clonezilla to create an image and then ran clonezilla on the VM and restored it. Note the need for mergeide.reg though.
  11. B

    Unable to log in

    You could use a Homeplug type connection, which uses the mains wiring to make a connection. I connect to my Proxmox server in the garage this way. Not max speed, but more than adequate.
  12. B

    LXC images

    You could look here: https://github.com/tteck/Proxmox
  13. B

    [SOLVED] Unable to delete linked Templates

    ok, I've managed to delete them all now, by following back from the latest to the earliest (the problem was that one of my serialised names was out of step). Many thanks for your help and patience!
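The "follow back from the latest to the earliest" order described above can be sketched as a dry run. The VMIDs and their chain order (105 → 106, and 117 → 131 → 107) are assumptions reconstructed from the error messages quoted elsewhere in this thread, not confirmed by the poster.

```shell
#!/bin/sh
# Dry run: print the qm destroy commands newest-first, so each base
# volume is only removed after the linked clones built on it are gone.
# The VMID order below is an assumption pieced together from the
# "still in use by linked clone" errors in this thread.
order="107 131 117 106 105"   # children before the bases they were cloned from

plan=""
for id in $order; do
  plan="${plan}qm destroy $id --purge
"
done
printf '%s' "$plan"
```

Deleting in this order avoids the "base volume ... is still in use" error, because `qm destroy` refuses to remove a template while any linked clone still references its base disk.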
  14. B

    [SOLVED] Unable to delete linked Templates

    So it is, however:
    root@pve:~# qm destroy 107 --purge
    base volume 'rpool1:base-131-disk-0/base-107-disk-0' is still in use by linked cloned
    root@pve:~# qm destroy 107
    base volume 'rpool1:base-131-disk-0/base-107-disk-0' is still in use by linked cloned
    root@pve:~#
  15. B

    [SOLVED] Unable to delete linked Templates

    root@pve:~# pvesh get /nodes/pve/storage/rpool1/content --output-format json-pretty
    "my" variable $node masks earlier declaration in same scope at /usr/share/perl5/PVE/API2/Disks/ZFS.pm line 345.
    [
      {
        "content" : "images",
        "format" : "raw",
        "name" : "base-109-disk-0"...
  16. B

    [SOLVED] Unable to delete linked Templates

    That is the VM 131 (see above) - the last in the chain - I have already deleted all the linked VMs. The complete chain of templates has drives in two storage areas, I'll share those a little later.
  17. B

    [SOLVED] Unable to delete linked Templates

    ... and if I try to delete the final template, I get the same error message:
    root@pve:~# qm destroy 131
    base volume 'rpool1:base-117-disk-0/base-131-disk-0' is still in use by linked cloned
    root@pve:~# qm destroy 131 --purge
    base volume 'rpool1:base-117-disk-0/base-131-disk-0' is still in use by...
  18. B

    [SOLVED] Unable to delete linked Templates

    root@pve:~# qm destroy 106 --purge
    base volume 'local-zfs:base-105-disk-0/base-106-disk-0' is still in use by linked cloned
    root@pve:~# qm destroy 106
    base volume 'local-zfs:base-105-disk-0/base-106-disk-0' is still in use by linked cloned
    root@pve:~#
  19. B

    [SOLVED] Unable to delete linked Templates

    I am trying to just remove these templates entirely (all of them); it is beginning to sound as though this is not possible through the commands?
  20. B

    [SOLVED] Unable to delete linked Templates

    105 is the oldest in the series of templates, 106 is the next one in the series.
    root@pve:~# pvesh get /nodes/pve/storage/local-zfs/content --output-format json-pretty | grep 105
    "my" variable $node masks earlier declaration in same scope at /usr/share/perl5/PVE/API2/Disks/ZFS.pm line 345...