I didn't check that, so I don't know.
Okay, I will remove /boot/efi from fstab. I think it makes sense not to have it there while multiple EFI partitions exist (one on each drive).
Are you sure I shouldn't have /boot/efi in fstab? Will the system still boot after I remove it? I don't know in detail how the GRUB/EFI boot sequence really works, what the steps are, or what loads after what. Can I clone the EFI partition to another disk, and will it boot when a disk fails? Probably...
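A minimal sketch of cloning the ESP to a second disk, assuming the ESP is partition 2 on both disks (the device names, label, and loader path below are placeholders for your own layout):

```shell
# Clone the ESP from the healthy disk to the replacement disk
# (verify the partition numbers first -- dd to the wrong target is fatal):
dd if=/dev/sda2 of=/dev/sdb2 bs=1M status=progress

# Register the new ESP with the firmware so it is tried at boot:
efibootmgr --create --disk /dev/sdb --part 2 \
    --label "proxmox (disk 2)" --loader '\EFI\proxmox\grubx64.efi'

# On Proxmox, proxmox-boot-tool can manage multiple ESPs for you instead:
proxmox-boot-tool status
```

With both ESPs registered, the firmware can fall back to the surviving disk's loader when one drive fails.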
Yeah, I agree: I want a running system, not all VMs shut down while I wait for a boot repair.
That's sad, because I've added a third drive to the mirror, but according to this, if the boot partition disappears again, it will fail again :( Do you have any idea how I can disable this check? Or to have...
There was a failure; I replaced the drive with another one (a different model) and it failed after one month - in the same drive bay, so I think it's not the drive but the controller/SATA cable. It's a Supermicro case and the indicator on the drive caddy was steady green. Now I've moved it to another slot/controller...
Hello, I have a 2-disk mirror ZFS pool for Proxmox (as the root partition). When one of these 2 disks failed, my system shut down all VMs and stayed in some emergency mode. Is it possible to set things up so that the system stays running when a zpool disk fails? Thank you.
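One common cause of this, as a sketch: a device listed in /etc/fstab without the `nofail` option makes systemd drop to emergency mode at boot when that device is missing, even though the degraded mirror itself would keep working. The fstab line below is an example with a placeholder UUID:

```shell
# Non-essential mounts (like a per-disk ESP) can be marked "nofail"
# so a missing device does not abort the boot:
#
#   UUID=XXXX-XXXX  /boot/efi  vfat  defaults,nofail  0  1
#
# Meanwhile, check whether the pool itself is merely DEGRADED:
zpool status -x
```

A DEGRADED mirror keeps serving I/O; it is usually the surrounding mounts and services that stop the boot.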
impact: Fault tolerance of the pool may...
Thank you for your comment, I understand the problem as you described it. However, I think it's not so hard to sum the CPU usage of the processes running in an LXC container and scale the result according to the assigned CPU core count.
Hello, did you end up buying the server with the D7-P5520? Can you share your zpool settings, if you're using ZFS? I think we should use ashift=13 or 12 and also change the NVMe formatting, but I'm not sure to which profile (there are 5 of them).
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt...
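A sketch of how the LBA format and ashift could be matched up, assuming a single namespace on /dev/nvme0n1 (the `--lbaf` index is an example; pick the 4096-byte entry from your own drive's list):

```shell
# List the LBA formats the namespace supports (the "in use" one is current):
nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"

# Reformat the namespace to a 4K LBA format -- THIS DESTROYS ALL DATA.
# The index below is a placeholder; use the 4096-byte row from above:
nvme format /dev/nvme0n1 --lbaf=1

# ashift=12 matches a 4K sector (2^12 bytes); ashift=13 would assume 8K:
zpool create -o ashift=12 tank /dev/nvme0n1
```

Note that ashift is fixed at vdev creation time, so it pays to settle the LBA format first.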
Yes, all VMs and containers must be shut down; basically the whole system goes "off", and it switches kernels instead of restarting the computer.
Booting takes very long on servers, where the BIOS checks everything; kexec really helps to skip that.
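A minimal sketch of the kexec flow being discussed, assuming a Debian-style /boot layout (paths are derived from the running kernel version):

```shell
# Stage the new (here: currently running) kernel and initrd,
# reusing the current kernel command line:
kexec -l /boot/vmlinuz-$(uname -r) \
      --initrd=/boot/initrd.img-$(uname -r) \
      --reuse-cmdline

# Hand over via systemd so services (VMs, containers) shut down
# cleanly before the kernel switch -- no firmware/BIOS pass:
systemctl kexec
```

This skips the firmware POST entirely, which is where most of the multi-minute server boot time goes.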
I have this same problem when upgrading my server. But this BPF spam comes from the old kernel - once I rebooted, it was gone and everything was fine.
My issue was that I was using legacy boot - before rebooting I had to sort this out and install UEFI with...
I have the same problem - I'm providing config files with qm set 138 --cicustom "network=local:snippets/138-network.yml,user=local:snippets/138-user.yml" and Debian gets stuck on:
[ 4.122675] cloud-init[209]: SHA256:vnpWXRTWMVUM59cjPChKnfxp0g3YVj+pcrez0Fik7tk root@debian
[ 4.124201]...
Is there any progress on this issue, please? Is anyone using kexec successfully?
Thanks.
EDIT: I've tried this and it works - https://patrakov.blogspot.com/2019/06/kexec-on-modern-distributions.html
EDIT: File has to have .yml extension, not .yaml. It's working fine with .yml
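A sketch of the working sequence, assuming the default `local` directory storage at /var/lib/vz (the VM ID and filename are examples from the posts above):

```shell
# Place the snippet on a storage that allows "snippets" content;
# note the .yml extension:
cp 138-user.yml /var/lib/vz/snippets/

# Point the VM's cloud-init at the custom user-data:
qm set 138 --cicustom "user=local:snippets/138-user.yml"

# Verify what cloud-init will actually receive inside the guest:
qm cloudinit dump 138 user
```

The `qm cloudinit dump` check is a quick way to see whether the snippet is being picked up at all before booting the VM.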
I have the same problem: Ubuntu 20.04 doesn't apply my custom cicustom user YAML configuration. I've already set the local storage content to snippets.
dir: local
path /var/lib/vz
content...
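A sketch of enabling snippets on the storage from the CLI instead of editing storage.cfg by hand (the content list below is an example; keep whatever types you already have):

```shell
# Add "snippets" to the storage's allowed content types:
pvesm set local --content iso,vztmpl,backup,snippets

# Confirm the change took effect:
pvesm status --storage local
grep -A2 "^dir: local" /etc/pve/storage.cfg
```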
I already tried using the cache file a few days ago - it didn't help. There was some problem with the cache file disappearing, so I also tried a hack to back up & restore the cache file - that didn't work either.
did you change any ZFS related systemd service files?
I'm not aware of it. But I have to say that...
This is weird:
root@supernas:/tank/container# zfs mount tank/container/subvol-109-disk-0
cannot mount 'tank/container/subvol-109-disk-0': filesystem already mounted
root@supernas:/tank/container# zfs unmount tank/container/subvol-109-disk-0
umount: /tank/container/subvol-109-disk-0: not...
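A diagnostic sketch for this "already mounted / not mounted" contradiction, using the dataset name from the session above: compare what ZFS believes with what the kernel actually has mounted.

```shell
# What ZFS thinks:
zfs get mounted,mountpoint tank/container/subvol-109-disk-0

# What the kernel mount table actually shows at that path:
findmnt /tank/container/subvol-109-disk-0

# If the mountpoint directory contains stale files, the boot-time
# "zfs mount -a" refuses to mount over it; an overlay mount (-O)
# forces it for a test:
zfs mount -O tank/container/subvol-109-disk-0
```

A non-empty mountpoint directory left behind by a failed unmount is a common cause of exactly this mismatch.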
Hello, I've done all the suggested steps and memtest passed without errors, but the behaviour is still the same - the ZFS folders are not mounted at startup.
Is there anything else I can do, please?
Current syslog lines:
Apr 20 07:12:27 supernas kernel: [ 41.070493] audit: type=1400 audit(1587359547.386:19)...
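A sketch of where to look next for datasets not mounting at boot, assuming the pool name `tank` from the posts above:

```shell
# Inspect the systemd units responsible for importing and mounting:
systemctl status zfs-import-cache.service zfs-mount.service

# Mount everything by hand to confirm the datasets themselves are fine:
zfs mount -a

# Rebuild the pool cache file that zfs-import-cache relies on:
zpool set cachefile=/etc/zfs/zpool.cache tank
```

If `zfs mount -a` works by hand but not at boot, the problem is usually in the import/mount unit ordering or a stale cache file rather than in the pool itself.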
When the LXC container starts, there is an error in the host syslog:
kernel: [317850.933850] audit: type=1400 audit(1587296481.309:169): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/proc/sys/kernel/random/boot_id" pid=26103...
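A pragmatic (and security-reducing) workaround sketch for such AppArmor mount denials: run just this container unconfined and see whether the denial disappears. The container ID 109 is an example; the config key is standard LXC syntax.

```shell
# In /etc/pve/lxc/109.conf add:
#
#   lxc.apparmor.profile: unconfined
#
# Then restart the container:
pct stop 109
pct start 109

# Check whether the DENIED messages are gone:
journalctl -k | grep 'apparmor="DENIED"' | tail
```

If the container works unconfined, the proper fix is a custom AppArmor profile that allows the specific mount, rather than leaving it unconfined permanently.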