Search results

  1. Crontab and Updates

    Not so far... I've had a cron entry for the last year and haven't lost it yet.
  2. The server gets stuck during the boot phase, with the message: (chain on hard drive failed)

    This may not be your problem, but I've had problems with a hanging boot phase when an external USB drive is plugged in, on PVE 5. Now on 6, and no idea if it still happens.
  3. mdadm error since upgrading to 6.3.

    I rebooted, and now get this: This is an automatically generated mail message from mdadm running on pve A DegradedArray event had been detected on md device /dev/md127. Faithfully yours, etc. P.S. The /proc/mdstat file currently contains the following: Personalities : [linear] [multipath]...
  4. mdadm error since upgrading to 6.3.

    # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays ARRAY /dev/md/0 metadata=1.0 UUID=a8dad329:be8f6a48:f913d6f8:d60ce5e6 name=localhost.localdomain:0 #...
  5. mdadm error since upgrading to 6.3.

    root@pve:~# ls -lh /dev/zvol/rpool/data/ | grep zd112p1 lrwxrwxrwx 1 root root 16 Nov 28 09:24 vm-100-disk-1-part1 -> ../../../zd112p1 root@pve:~# VM 100 is my main email and shared folder server. It certainly does have degraded partitions; that must be the boot partition. Why is it being...
  6. mdadm error since upgrading to 6.3.

    root@pve:~# ls /dev/zvol/rpool/ data/ swap root@pve:~# ls /dev/zvol/rpool/ data swap root@pve:~# ls /dev/zvol/rpool/data/ base-102-disk-0 vm-102-disk-0-part1 vm-113-disk-0 base-110-disk-0 vm-102-disk-0-part2 vm-113-disk-0-part1 base-110-disk-0-part1 vm-103-disk-0...
  7. mdadm error since upgrading to 6.3.

    Perhaps it's left over from an earlier attempt at loading pve on one of the 3 discs in the ZFS set. I seem to have two partitions and a full disc in it. I may have had to install debian, then pve on top, rather than using the dedicated iso. I had a number of goes at it before getting something...
  8. mdadm error since upgrading to 6.3.

    None of my VMs have /etc/md/0 AFAICT root@pve:~# fdisk /dev/zd112p1 Welcome to fdisk (util-linux 2.33.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. The old linux_raid_member signature will be removed by a write command. Device...
  9. mdadm error since upgrading to 6.3.

    Can someone explain? This is an automatically generated mail message from mdadm running on pve A DegradedArray event had been detected on md device /dev/md/0. Faithfully yours, etc. P.S. The /proc/mdstat file currently contains the following: Personalities : [linear] [multipath] [raid0]...
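    Reading the thread as a whole, the repeated DegradedArray mails appear to come from the host's mdadm assembling RAID signatures it finds inside guest zvols (e.g. /dev/zd112p1 above), not from a host array. A hedged sketch of one way to stop that, assuming no host array relies on auto-assembly; the AUTO keyword is standard mdadm.conf syntax, but whether it is appropriate here is an assumption:

    ```
    # /etc/mdadm/mdadm.conf on the PVE host -- sketch only; check first
    # that the host itself has no array that needs auto-assembly.
    AUTO -all          # do not auto-assemble any arrays found by scanning
    MAILADDR root      # keep mail alerts for explicitly listed arrays
    ```

    On Debian-based hosts, running update-initramfs -u afterwards keeps the copy of mdadm.conf embedded in the initramfs in sync, so the setting also applies at boot.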
  10. Upgrade to 6.3 fails

    Really? Not in the past. However I've done a "dist-upgrade" now, and things seem fine. I generally do an upgrade so as to avoid new kernels and the associated reboot until I am able and ready. Up to now new versions of pve have installed fine.
  11. Upgrade to 6.3 fails

    root@pve:~# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following package was automatically installed and is no longer required: pve-kernel-5.4.34-1-pve Use 'apt autoremove' to remove it. The...
  12. Unable to mount backup

    root@pve:~# mdadm --examine /dev/loop0 /dev/loop0: MBR Magic : aa55 Partition[0] : 512000 sectors at 2048 (type fd) Partition[1] : 3774359552 sectors at 514048 (type 8e) root@pve:~# I am all for pragmatism, but this VM has 1.5 TB of data, and thus I cannot easily...
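    A quick sanity check on the sizes in the --examine output above: the second MBR partition (type 8e, the LVM one) is 3774359552 sectors of 512 bytes each, which is consistent with a VM holding roughly 1.5 TB of data.

    ```shell
    # Convert the partition's sector count from the mdadm --examine
    # output into GiB (512-byte sectors).
    SECTORS=3774359552
    BYTES=$((SECTORS * 512))
    echo "$((BYTES / 1024 / 1024 / 1024)) GiB"   # -> 1799 GiB (~1.8 TiB)
    ```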
  13. Unable to mount backup

    Thanks for your help. The VM is a server running in degraded RAID mode (only one disc). It is an smeserver based on CentOS 6 (yes, I know it is EOL), and the installation did not offer a non-RAID option (or at least I did not find it!). Just looked at your link. Phew...
  14. Unable to mount backup

    root@pve:~# ls /dev/loop0* /dev/loop0 /dev/loop0p1 /dev/loop0p2 root@pve:~# mount /dev/loop0p2 /mnt/vzsnap0 mount: /mnt/vzsnap0: unknown filesystem type 'LVM2_member'. root@pve:~# mount -t ext4 /dev/loop0p2 /mnt/vzsnap0 mount: /mnt/vzsnap0: wrong fs type, bad option, bad superblock on...
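    The 'LVM2_member' error above means /dev/loop0p2 is an LVM physical volume, so it cannot be mounted directly; the logical volumes inside it have to be activated first. A hedged sketch, assuming the loop device is already set up as in the listing; the volume group and logical volume names are hypothetical:

    ```shell
    # An LVM2_member is a PV, not a filesystem -- activate its VG first.
    vgscan                                    # discover the VG on /dev/loop0p2
    vgchange -ay                              # activate its logical volumes
    lvs                                       # note the LV name, e.g. vg_main/lv_root
    mount /dev/vg_main/lv_root /mnt/vzsnap0   # mount the LV, not the PV
    ```

    When finished, the reverse order applies: umount, then vgchange -an before detaching the loop device.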
  15. Unable to mount backup

    Progress! Now I've got the image mapped, how can I access it from the command line? I've tried mount: root@pve:~# proxmox-backup-client map vm/100/2020-10-16T22:45:02Z drive-virtio0.img --repository "root@pam@192.168.100.98:pbsbackuplocal" Password for "root@pam": ******** Image...
  16. Unable to mount backup

    root@pve:~# proxmox-backup-client map vm/100/2020-10-16T22:45:02Z pbsbackuplocal --repository "root@pam@pveserver.bjsystems.co.uk:pbsbackuplocal" Password for "root@pam": ******** Error: Can only mount/map pxar archives and drive images. root@pve:~# I guess I am still not understanding this!
  17. Unable to mount backup

    root@pve:~# proxmox-backup-client mount vm/100/2020-10-16T22:45:02Z pbsbackuplocal /mnt/vnsnap0 Error: unable to get (default) repository root@pve:~# From the pve command line.
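    Piecing the three attempts above together: the second positional argument must be an archive name from the snapshot (a drive image for VMs, a .pxar for host backups), not the datastore name, and the repository has to be supplied via --repository or the PBS_REPOSITORY environment variable. A sketch, reusing the names from the earlier posts; the loop device number is an assumption:

    ```shell
    # Repository from the working attempt in this thread:
    export PBS_REPOSITORY='root@pam@192.168.100.98:pbsbackuplocal'
    # 'map' takes the snapshot plus an *archive name*, not the datastore:
    proxmox-backup-client map vm/100/2020-10-16T22:45:02Z drive-virtio0.img
    # then mount a partition of the mapped block device (loop0 assumed):
    mount /dev/loop0p2 /mnt/vzsnap0
    # for host backups, 'mount' takes a .pxar archive and a mountpoint:
    # proxmox-backup-client mount host/backup-client/2020-01-29T11:29:22Z root.pxar /mnt/vzsnap0
    ```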
  18. Restore single files from host backup (pxar) on cli non-interactive

    OK, thanks, that is very useful. I am getting a failure when I try it, but I'll post in a new thread...
  19. Restore single files from host backup (pxar) on cli non-interactive

    aha - thanks proxmox-backup-client mount host/backup-client/2020-01-29T11:29:22Z root.pxar /mnt/mountpoint Could you expand the description of this command? It is not clear to me whether "host" and "backup-client" are part of the snapshot name, or whether it needs the host and the "backup-client" in the...
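    To the question in the snippet: the snapshot path decomposes as <backup-type>/<backup-id>/<timestamp>, so "host" is the backup type and "backup-client" is the backup ID (typically the hostname the client backed up as). A small sketch of that decomposition, using the path from the post:

    ```shell
    # Split a PBS snapshot path into its three components using
    # POSIX parameter expansion.
    SNAPSHOT='host/backup-client/2020-01-29T11:29:22Z'
    TYPE=${SNAPSHOT%%/*}           # backup type, e.g. host or vm
    REST=${SNAPSHOT#*/}
    ID=${REST%%/*}                 # backup ID (hostname or VMID)
    TIMESTAMP=${REST#*/}           # snapshot time, RFC 3339 format
    echo "type=$TYPE id=$ID time=$TIMESTAMP"
    # -> type=host id=backup-client time=2020-01-29T11:29:22Z
    ```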
  20. Restore single files from host backup (pxar) on cli non-interactive

    That link to the docs for the FUSE mounting is not working - have you got a better one?