Search results

  1. Encrypted drive migration

    I created a (nested) server (an LXC with an encrypted drive) for my personal website ( https://sami.mattila.eu ), because I have invested a lot of time and effort in the Genealogy "branch" of it. (Pun intended.) I would like to move it to another Proxmox host, but Proxmox always complains about the...
  2. Encrypted drive migration

    I'm having problems migrating an encrypted drive. Is there a quick guide on how to do it?
  3. IO error on virtual zfs drive after power loss

    That's a good tip. I actually tried that, but it failed. So far -FX is the only thing that seems to work (it's still running). Apparently -X and -T are last-resort options only. So far I have only got one kernel error from zfs: PANIC: zfs: adding existent segment to range tree. So far so good. It's good to...
  4. IO error on virtual zfs drive after power loss

    I'm running zpool import -FX vdd now, and it's actually doing something instead of just complaining. It seems to run for quite a long time.
  5. IO error on virtual zfs drive after power loss

    Another forum suggested these steps: zpool import -F, then -FX, then -T, in that order. (First take a backup of the original media.)
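For context, the escalating recovery sequence mentioned in that post would look roughly like this. This is a hedged sketch, not the thread's exact commands: the pool name vdd comes from the thread, but the device path and the txg number are placeholders, and -X/-T can permanently discard data, which is why the backup comes first.

```shell
# Back up the raw device before any recovery attempt.
dd if=/dev/vdd of=/backup/vdd.img bs=1M conv=fsync

zpool import -F vdd         # 1) rewind a few transaction groups (safest)
zpool import -FX vdd        # 2) extreme rewind; may discard recent writes
zpool import -T <txg> vdd   # 3) import at an explicit transaction group
```

Each step is tried only if the previous one fails; -T needs a concrete transaction-group number (e.g. from zdb output) in place of the placeholder.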
  6. IO error on virtual zfs drive after power loss

    ... and we are back at my original question... "How does one go about fixing zfs problems inside virtual drive?"
  7. IO error on virtual zfs drive after power loss

    root@vm2404:~# zpool import vdd -f cannot import 'vdd': I/O error Destroy and re-create the pool from a backup source.
  8. IO error on virtual zfs drive after power loss

    Inside the VM: root@vm2404:~# zpool import pool: vdd id: 4588309049495493978 state: FAULTED status: The pool metadata is corrupted. action: The pool cannot be imported due to damaged devices or data. The pool may be active on another system, but can be imported using...
  9. IO error on virtual zfs drive after power loss

    If there is an IO error it's inside the container.
  10. IO error on virtual zfs drive after power loss

    It's odd. Proxmox sees the ZFS drive but says... Could not activate storage. ZFS error. Can not import. IO error (500).
  11. IO error on virtual zfs drive after power loss

    How is that relevant to the (corrupted) ZFS drive?
  12. IO error on virtual zfs drive after power loss

    How does one go about fixing zfs problems inside a virtual drive? Should it be done inside the VM? (Might be difficult if you can't start it.) Or should it be done on the host? What commands do you use?
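One hedged answer to the question above, assuming the guest disk is a ZFS zvol on the Proxmox host: zvols appear as block devices under /dev/zvol/…, so the guest's pool can be imported on the host for inspection. The dataset path and VM disk name below are hypothetical.

```shell
# On the Proxmox host, find the VM's zvol (path/name are examples).
ls /dev/zvol/rpool/data/                 # e.g. vm-2404-disk-0

# Import the guest pool read-only under an alternate root.
# Never do this while the VM is running: two writers corrupt the pool.
zpool import -d /dev/zvol/rpool/data \
    -o readonly=on -R /mnt/rescue vdd

zpool status vdd                         # inspect / copy data off
zpool export vdd                         # export before booting the VM again
```

Read-only import avoids making a bad pool worse; recovery flags such as -F would then be tried from the host the same way.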
  13. Cannot install proxmox 7.1 with usb

    I'm afraid this is all that Proxmox gives... wrong proxmox cd-id Ugh... I have spoken... and say no more.
  14. Cannot install proxmox 7.1 with usb

    Sorry, no. I have done it a million times, but when I tried it with v7.2, the first two USB sticks I tried no longer worked. They always have in the past. When I tried with a new one, dd worked. Same command. wrong proxmox cd-id
  15. Cannot install proxmox 7.1 with usb

    It's a pity. The last couple of Proxmox ISO images seem very particular about which USB sticks they work with. I can't use dd to image the older USB drives anymore, and I'm not about to install 3rd-party software to do something this basic.
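For reference, the plain dd invocation under discussion looks like this, followed by a read-back verification. The ISO filename and device name are placeholders; double-check the device, since dd will happily overwrite the wrong disk.

```shell
ISO=proxmox-ve_7.1-2.iso   # hypothetical filename
DEV=/dev/sdX               # the whole USB stick, not a partition

dd if="$ISO" of="$DEV" bs=1M conv=fsync status=progress

# Verify: the first ISO-sized chunk of the stick must match the image.
SIZE=$(stat -c%s "$ISO")
head -c "$SIZE" "$DEV" | sha256sum
sha256sum "$ISO"
```

If the two checksums differ, the stick (or the write) is bad, which would also produce installer errors like the "wrong proxmox cd-id" seen in this thread.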
  16. Moving from Amavis to new and improved rspamd

    Normally PMG uses about 1 GB of RAM per VM. When Clam loads a new virus database, it loads the new DB before it drops the old one, effectively doubling RAM usage to 2 GB temporarily. There is an easy way to configure Clam to drop the old DB before starting the new one: echo "ConcurrentDatabaseReload...
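The truncated command above is presumably setting ClamAV's ConcurrentDatabaseReload option. A hedged sketch of what that configuration change looks like (the config path is the Debian/PMG default and may differ on your system):

```shell
# ConcurrentDatabaseReload is a clamd.conf option; setting it to "no"
# makes clamd unload the old signature database before loading the new
# one, trading a short scanning pause for roughly halved peak RAM.
echo "ConcurrentDatabaseReload no" >> /etc/clamav/clamd.conf
systemctl restart clamav-daemon
```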
  17. Moving from Amavis to new and improved rspamd

    Come on, Dietmar :) A decade ago you took my advice with LXC and ZFS, and Proxmox is rocking now. Eventually you will move to Rspamd. It's just a question of time.
  18. Solved my ZFS scrub problems.

    It seems that the "new" sequential scrub algorithm for ZFS is causing headaches on some of our nested systems. The "new" metadata scan reads through the structure of the pool and gathers an in-memory queue of I/Os, sorted by size and offset on disk. The issuing phase will then issue the scrub...
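If the sequential scrub is the culprit, OpenZFS exposes a module parameter to fall back to the old behaviour. A hedged sketch, assuming the zfs_scan_legacy tunable is available on the host's ZFS version:

```shell
# zfs_scan_legacy=1 reverts scrubs to the legacy (non-sequential)
# block-order algorithm, skipping the in-memory sorted I/O queue.
echo 1 > /sys/module/zfs/parameters/zfs_scan_legacy           # immediate
echo "options zfs zfs_scan_legacy=1" > /etc/modprobe.d/zfs-scan.conf  # persists across reboots
```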