Search results

  1. mattlach

    [SOLVED] Add ZFS Log and Cache SSDs, what i have to buy and to do?

    Short Answer: No. Long Answer: The different VDEVs used to speed things up (cache aka L2ARC, SLOG aka dedicated ZIL drive, and more recently the special allocation class) all take advantage of the underlying drives to do their thing. A cache VDEV - for instance - only makes sense if it is faster...
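
    For reference, adding a SLOG or L2ARC device to an existing pool is a single zpool command each; a minimal sketch, assuming a pool named rpool and hypothetical /dev/disk/by-id device paths:

        # Add a dedicated SLOG (separate ZIL) device; mirroring it protects in-flight sync writes
        zpool add rpool log mirror /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B
        # Add an L2ARC (read cache) device; cache vdevs are never mirrored
        zpool add rpool cache /dev/disk/by-id/nvme-SSD_C
        # Confirm the new vdevs appear under "logs" and "cache"
        zpool status rpool
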
  2. mattlach

    Additional Questions Re: "ZFS: Switch Legacy-Boot to Proxmox Boot Tool"

    Ugh. UEFI. I hate EFI booting. I don't understand why they couldn't leave well enough alone, taking something that has been working for decades and replacing it with UEFI, some marginally functioning trash. The old way was so simple. It just worked. I have moved some systems of mine to UEFI...
  3. mattlach

    Additional Questions Re: "ZFS: Switch Legacy-Boot to Proxmox Boot Tool"

    Thank you. That is a good suggestion. I gather I'd probably have to do this from a live disk, or risk problems, but that is doable. Appreciate the suggestion! Right now I am torn between this method, and the one described on the How To for Debian on the OpenZFS page which walks you through...
  4. mattlach

    Additional Questions Re: "ZFS: Switch Legacy-Boot to Proxmox Boot Tool"

    Hi Everyone, I have a few questions regarding the need to switch to Proxmox Boot Tool for booting from ZFS. I found the need to do this in reading the release notes for PVE 7.x in preparing for my upgrade from 6.4.9. Question 1.) It says the boot will break if I run zpool upgrade on the...
  5. mattlach

    Migrate Host ZFS Boot "rpool" from MBR to UEFI booting?

    Hey all, I run a standalone Proxmox server which I have been upgrading in place for years. I am currently on 6.x, not having upgraded to 7 yet. It was a clean install on Proxmox 4.2, I believe. When I initially installed it, I set it up to boot from a ZFS mirror of two SATA SSDs using...
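
    For orientation only (not a full migration procedure), the UEFI side of such a move boils down to giving each mirror member an EFI system partition and registering it with proxmox-boot-tool. Device and partition numbers below are hypothetical:

        # Create a 512M EFI system partition on each mirror member
        sgdisk -n 2:0:+512M -t 2:EF00 /dev/sda
        # Format it and register it with proxmox-boot-tool
        proxmox-boot-tool format /dev/sda2
        proxmox-boot-tool init /dev/sda2
        # Repeat for the second SSD, then verify
        proxmox-boot-tool status
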
  6. mattlach

    proxmox on arm64

    I am not familiar with the Chinese designs, but I do recall reading a lot of reviews of the Cavium (now Marvell owned) ARMv8 ThunderX2 servers and workstations about two years ago. https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/...
  7. mattlach

    DMESG Inundated with Apparmor errors

    So, some more poking around system logs suggests that this happens every time Ubuntu runs the php sessionclean script to clean up php sessions. These two containers must be the only ones running php. Does anyone know of a way to fix this?
  8. mattlach

    DMESG Inundated with Apparmor errors

    Hey all, I'm not very good with how AppArmor works, so I was hoping someone might help me solve this one. Two of my many LXC containers, ID 110 and ID 170, are absolutely flooding dmesg as follows: Please see this pastebin. It was too much to post in a message here. Two...
  9. mattlach

    [SOLVED] NIC Upgrade and /etc/udev/rules.d/70-persistent-net.rules

    DOH. I figured it out. I forgot I needed to run "update-initramfs -u" after making changes to udev config files to make everything work right. Rebooted, and now everything uses the new (hopefully static) device names... I'll leave this thread up here in case anyone else runs into the same...
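
    A minimal sketch of that sequence (the MAC address and interface name here are hypothetical placeholders):

        # Pin a NIC name to its MAC address in the persistent-net rules file
        # /etc/udev/rules.d/70-persistent-net.rules:
        #   SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"
        # The rules are also baked into the initramfs, so regenerate it after editing
        update-initramfs -u
        reboot
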
  10. mattlach

    [SOLVED] NIC Upgrade and /etc/udev/rules.d/70-persistent-net.rules

    Sigh. So, to get the machine working temporarily until I have all these devices figured out, I edited /etc/network/interfaces and used the new device names, followed by a reboot. After reboot, I now have yet another device with the old naming convention, eth1, instead of its recent name, enp13s0f0...
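
    For context, switching the interfaces file over to a predictable name looks roughly like this; the bridge name and addresses are made up for illustration, and enp13s0f0 is the name from the post above:

        # /etc/network/interfaces (fragment) -- use the new predictable name instead of eth1
        auto enp13s0f0
        iface enp13s0f0 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports enp13s0f0
            bridge-stp off
            bridge-fd 0
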
  11. mattlach

    [SOLVED] NIC Upgrade and /etc/udev/rules.d/70-persistent-net.rules

    Follow-up: On a whim I decided to make a backup copy of my existing 70-persistent-net.rules file, delete the one in /etc/udev/rules.d, and reboot to see what happened. My theory was that without this file assigning Ethernet device names, all of the devices would instead use the...
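
    The experiment described above is just a couple of commands plus a reboot; a sketch:

        # Keep a copy outside /etc/udev/rules.d so udev no longer reads it
        cp /etc/udev/rules.d/70-persistent-net.rules /root/70-persistent-net.rules.bak
        rm /etc/udev/rules.d/70-persistent-net.rules
        # Rebuild the initramfs so the stale copy inside it goes away too, then reboot
        update-initramfs -u
        reboot
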
  12. mattlach

    [SOLVED] NIC Upgrade and /etc/udev/rules.d/70-persistent-net.rules

    Hey all, I have been running a somewhat complex network setup on my server for some time: 2x copper Gigabit Ethernet on the server board, 4x copper Gigabit Ethernet (quad-port Intel PRO/1000 NIC), and 1x 10GBase-T Intel 82598EB. I'm not going to go into the details of what they are used for, as it is not...
  13. mattlach

    Create a Fresh Container On Top of Existing One

    Hey all, quick question. I have an existing, very complicated container, with lots of interfaces and mounts. It's running on Ubuntu 14.04 LTS, which is about to go EOL. I have tried ZFS snapshotting the existing container's rpool/subvol-110-disk-1 location and doing an in-place upgrade, but it...
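
    The snapshot-before-upgrade step mentioned above is a one-liner; a sketch using the dataset name from the post (the snapshot name is arbitrary):

        # Take a snapshot of the container's root dataset before attempting the in-place upgrade
        zfs snapshot rpool/subvol-110-disk-1@pre-upgrade
        # Roll back if the upgrade goes sideways (stop the container first)
        pct stop 110
        zfs rollback rpool/subvol-110-disk-1@pre-upgrade
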
  14. mattlach

    Network Configuration for LXC Containers

    Good to know, thank you. Maybe I am confused. Is it only the desktop version of 18.04 that defaults to netplan? I am curious. How does it determine the OS version? Does it parse the container's /etc/lsb-release?
  15. mattlach

    Network Configuration for LXC Containers

    Hey all, So, I know you configure the network interfaces for new containers in the web interface (or by editing the corresponding config file in /etc/pve/lxc), but how does it work when you actually power up the container? The reason I ask is, I have a bunch of Ubuntu 14.04-based containers...
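
    For reference, the interface definition PVE stores per container is a single net line in that container's config; a hypothetical example (container ID, bridge, MAC, and addresses are made up):

        # /etc/pve/lxc/110.conf (fragment)
        net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=192.168.1.50/24,gw=192.168.1.1,firewall=1

    As far as I understand, PVE's container setup code translates this line into the matching guest-side network config at start time based on the detected distribution, which is exactly what the question above is getting at.
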
  16. mattlach

    LXC container reboot fails - LXC becomes unusable

    Thanks for the help. I rebooted the server today, and it appears to be running normally again. Hopefully a 4.18+ PVE kernel that fixes this issue will be made available quickly. I mean, I could easily compile one, download a mainline binary kernel, or add the sources for the kernel from...
  17. mattlach

    LXC container reboot fails - LXC becomes unusable

    Hmm. I will have to check this a little later. Does a reboot temporarily solve the issue? I could probably do that overnight, and then go another few months without running into it again. My use case doesn't require restarting containers regularly. They start once when the server goes up...
  18. mattlach

    LXC container reboot fails - LXC becomes unusable

    So, I am on the following kernel: Linux proxmox 4.15.18-5-pve #1 SMP PVE 4.15.18-24 (Thu, 13 Sep 2018 09:15:10 +0200) x86_64 GNU/Linux I just shut down a container today using "pct stop 200". I went to start it back up again with "pct start 200" and this process just sits there doing nothing...
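
    When pct start hangs like that, a debug foreground start is a common next step; a sketch using the container ID from the post (the log path is arbitrary):

        # Check what PVE thinks the container state is
        pct status 200
        # Start it in the foreground with verbose LXC logging to see where it stalls
        lxc-start -n 200 -F -l DEBUG -o /tmp/lxc-200.log
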
  19. mattlach

    In Place Upgrade of Ubuntu 14.04 LXC Container to 18.04?

    Hi all, Is this advisable? The reason I ask is, I'm not sure I fully understand how the PVE frontend configures the container's network and other settings. 14.04 and 16.04 use ifup/down and are thus configured in /etc/network/interfaces, but 18.04 replaces ifup/down with netplan, which is...
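
    To make the difference concrete, the same static address looks like this under the two schemes; interface name and addresses are purely illustrative:

        # 14.04/16.04 style -- /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1

        # 18.04 style -- /etc/netplan/01-netcfg.yaml
        network:
          version: 2
          ethernets:
            eth0:
              addresses: [192.168.1.50/24]
              gateway4: 192.168.1.1
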
