The upgrade path was not too smooth there.. If you need details about all this, you can check https://pve.proxmox.com/wiki/Host_Bootloader on the wiki, btw.
well, proxmox-boot-tool here thinks it's driving systemd-boot, and so does the UEFI setup. You could re-enable it (install the tooling, then make sure it boots the proper kernel). But I'd suggest going back to GRUB if you don't need...
I had to force unmount it, and now it will not mount. I powered it off and back on, and it no longer appears when I do lsblk, so it is in rough shape. Thanks for all your help, going to mark this as solved.
yeah, re-init with grub using proxmox-boot-tool, as I suggested, that should do it. Or switch to UEFI entry 0007 (cue the James Bond theme..), possibly with efibootmgr, but more likely in your UEFI/BIOS setup.
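If you go the efibootmgr route, it could look roughly like this (entry 0007 taken from this thread; needs root on an EFI-booted system, so treat it as a sketch):

```shell
# Show current UEFI boot entries and the boot order
efibootmgr -v
# Try entry 0007 once, on the next reboot only (safe way to test it)
efibootmgr -n 0007
# If it works, make it permanent by putting it first in the boot order,
# e.g. (adjust the rest of the order to your own entries):
# efibootmgr -o 0007,0001,0002
```

Using `-n` (BootNext) first means a bad choice costs you one reboot, not a trip into the firmware setup.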
OK, what about efibootmgr ?
It seems that proxmox-boot-tool is not set to use grub.. so I'd say either format / init the partition (proxmox-boot-tool init /dev/disk/by-uuid/AED9-4562 grub, probably), so that it gets back on grub.
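For reference, the re-init could look like this (UUID taken from this thread; double-check that it really is your ESP before touching anything, this is only a sketch):

```shell
# What does proxmox-boot-tool currently think it manages?
proxmox-boot-tool status
# Make sure this UUID really is your EFI system partition first
lsblk -o NAME,FSTYPE,UUID
# Re-init in grub mode, then copy the kernels over
proxmox-boot-tool init /dev/disk/by-uuid/AED9-4562 grub
proxmox-boot-tool refresh
```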
Could you check the permissions after that in /mnt/pve/NAS_NFS/dump/ ?
It could also be telling you it's not permitted because the share is full; does df -h /mnt/pve/NAS_NFS/ look alright?
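A quick, script-friendly way to check fullness (using /tmp here so it runs anywhere; point target at /mnt/pve/NAS_NFS on the real box):

```shell
# Print usage of a mount point, and extract the use% as a bare number
target=/tmp    # on the real system: target=/mnt/pve/NAS_NFS
df -h "$target"
pct=$(df --output=pcent "$target" | tail -n 1 | tr -dc '0-9')
echo "usage: ${pct}%"
```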
Many DHCP servers allow you to specify custom option codes (250 here) for a given MAC, without scripting.. you can also cheat and use info from the request in apache or nginx to match a file on the server..
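With dnsmasq, for example, a per-MAC option 250 is a two-liner; the MAC address and URL below are made up:

```
# dnsmasq.conf sketch: tag one client by its MAC,
# then send option 250 only to clients with that tag
dhcp-host=aa:bb:cc:dd:ee:ff,set:mybox
dhcp-option=tag:mybox,250,"http://example.lan/answer.toml"
```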
Well, after all those years of mounting images manually, at least I discovered libguestfs-tools.. that is something...
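In case it helps someone else, these are the libguestfs-tools bits I wish I had known about earlier (the disk image path is hypothetical):

```shell
# Inspect a VM disk image without mounting anything by hand
virt-filesystems -a /path/to/vm-disk.qcow2 --long
# List or extract files straight out of the image
virt-ls -a /path/to/vm-disk.qcow2 /etc
virt-copy-out -a /path/to/vm-disk.qcow2 /etc/fstab /tmp/
```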
I don't really like using the debian cloud img because of a few opinionated choices there.. off the top of my head.. no LVM / everything in one partition...
I'd say your drive is not in great shape.. check dmesg for info, and try unmounting / remounting it. But either it was unplugged / badly treated, or it's dying.
So, I'd just maproot (user) to root (IIRC mapall doesn't apply to root in TrueNAS, and I don't think it's a good idea to map to root anyway..), and fix the permissions (chown -R root: /mnt/pve/NAS_NFS once re-mounted). You don't need non-root...
Yeah, it's not that bad; it's indeed the route handling that drives it to think it has a /8 network. The code is not wrong per se, but it fails to select the correct (more precise) route, though that may be by design. In src/PMG/Utils.pm it does a proper route check...
How is your interface configured, though?
EDIT:
I'm guessing it's configured properly, as in a /16 or /24. I can reproduce it instantly whatever the size of the configured network: pmg guesses "class A! What else could it be!" when using 10.x subnets. I'd say...
I need to check.. it may be a "bug" in the way it determines its own net.. not totally wrong, because 10/8 certainly includes 10.10/16 or what not, but not exactly clever. I don't remember how this is computed, but I think that is the issue..
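For what it's worth, the classful guess is easy to reproduce outside pmg; this is my own re-implementation of the historical class rules, not the actual pmg code:

```shell
# Classful netmask guess from the first octet (pre-CIDR rules):
#   0-127 -> class A (/8), 128-191 -> class B (/16), 192-223 -> class C (/24)
classful_prefix() {
    first=${1%%.*}
    if [ "$first" -lt 128 ]; then echo 8
    elif [ "$first" -lt 192 ]; then echo 16
    else echo 24; fi
}
# A 10.x address always comes out as /8, whatever the real mask is:
classful_prefix 10.10.0.1     # -> 8, even if the interface is really a /16
classful_prefix 192.168.1.1   # -> 24
```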
Hi,
CIFS should have worked, but well, if you can set up an NFS export, that's fine as well. Can you show us the storage.cfg part about the NFS mount? Maybe also findmnt -u /mnt/pve/NAS_NFS ?
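For comparison, a typical NFS section in /etc/pve/storage.cfg looks roughly like this (server address and export path are made up):

```
nfs: NAS_NFS
	export /mnt/tank/pve
	path /mnt/pve/NAS_NFS
	server 192.168.1.50
	content backup
	options vers=4.2
```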
Can you check the permissions on /mnt/pve/NAS_NFS/ and...
How are both datastores set up? It seems either extdrv1 is not mounted, or maybe pbs can't write to it.. but you'd have had an error before the sync job...
what does findmnt -u /mnt/extdrv1 tell you? what about ls -al /mnt/extdrv1/ ?
That looks to me more like your EFI partition being full (see [..] espmounts/6C4C-8D6F [..]).
Please run proxmox-boot-tool status
Edit:
Please also try
mkdir -p /mnt/esp; mount /dev/disk/by-uuid/6C4C-8D6F /mnt/esp; df -h...
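If df then shows the ESP at 100%, stale kernel images are the usual culprit. A possible cleanup, assuming a proxmox-boot-tool-managed setup (sketch, check the kernel list before purging anything):

```shell
# List the kernels proxmox-boot-tool currently syncs to the ESP(s)
proxmox-boot-tool kernel list
# Purge kernel packages that are no longer needed, then re-sync the ESPs
apt autoremove --purge
proxmox-boot-tool refresh
```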
Hi!
There is no "SRM", at least not provided by Proxmox. You get internal backups, or you can use PBS, but all that is more about replacing the Veeam part than the SRM. However, by using PBS, and adding a layer of backup for the hosts...