Short Answer:
No.
Long Answer:
The different VDEVs used to speed things up (cache, aka L2ARC; SLOG, aka a dedicated ZIL drive; and more recently the special allocation class) all rely on the performance of their underlying drives to do their job.
A cache VDEV, for instance, only makes sense if it is faster...
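For reference, adding either kind of VDEV is a one-line zpool operation; a minimal sketch, with a hypothetical pool name (tank) and hypothetical NVMe device paths:

    # Add an L2ARC cache device to an existing pool (hypothetical names)
    zpool add tank cache /dev/nvme0n1
    # Add a SLOG (dedicated ZIL) device
    zpool add tank log /dev/nvme1n1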
Ugh. UEFI.
I hate EFI booting.
I don't understand why they couldn't leave well enough alone. They took something that had been working for decades and replaced it with UEFI, some marginally functioning trash. The old way was so simple. It just worked.
I have moved some systems of mine to UEFI...
Thank you. That is a good suggestion.
I gather I'd probably have to do this from a live disk, or risk problems, but that is doable.
Appreciate the suggestion!
Right now I am torn between this method and the one described in the Debian how-to on the OpenZFS page, which walks you through...
Hi Everyone,
I have a few questions regarding the need to switch to Proxmox Boot Tool for booting from ZFS.
I learned of the need to do this while reading the release notes for PVE 7.x in preparation for my upgrade from 6.4.9.
Question 1.)
It says the boot will break if I run zpool upgrade on the...
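For anyone else preparing the same switch, the basic workflow as I understand it from the docs (these are real proxmox-boot-tool subcommands; /dev/sda2 is just a placeholder ESP partition) looks something like this:

    # Show which ESPs are registered and how the system boots
    proxmox-boot-tool status
    # Format and register an ESP partition (placeholder device)
    proxmox-boot-tool format /dev/sda2
    proxmox-boot-tool init /dev/sda2
    # Copy the current kernels and bootloader config to all registered ESPs
    proxmox-boot-tool refresh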
Hey all,
I run a standalone Proxmox server, which I have been upgrading in place for years. I am currently on 6.x, not having upgraded to 7 yet.
It was a clean install of Proxmox 4.2, I believe.
When I initially installed it, I set it up to boot from a ZFS mirror of two SATA SSDs using...
I am not familiar with the Chinese designs, but I do recall reading a lot of reviews of the Cavium (now Marvell-owned) ARMv8 ThunderX2 servers and workstations about two years ago.
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/...
So, some more poking around the system logs suggests that this happens every time Ubuntu runs the PHP sessionclean script to clean up PHP sessions. These two containers must be the only ones running PHP.
Does anyone know of a way to fix this?
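For anyone searching later: one blunt way to confirm AppArmor is the culprit would be overriding the container's profile (lxc.apparmor.profile is a standard LXC key, but unconfined disables AppArmor confinement for the container entirely, so treat this as a diagnostic, not a fix):

    # In /etc/pve/lxc/110.conf - diagnostic only, removes AppArmor confinement
    lxc.apparmor.profile: unconfined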
Hey all,
I'm not very familiar with how AppArmor works, so I was hoping someone might help me solve this one.
Two of my many LXC containers, ID 110 and ID 170, are absolutely spamming dmesg as follows:
Please see this pastebin. It was too much to post in a message here.
Two...
DOH.
I figured it out.
I forgot I needed to run "update-initramfs -u" after making changes to udev config files to make everything work right.
Rebooted, and now everything uses the new (hopefully static) device names...
I'll leave this thread up here in case anyone else runs into the same...
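For completeness, the sequence that fixed it for me (the rules file path is the standard one; use whatever editor you prefer):

    # Edit the persistent-net rules, then rebuild the initramfs so the
    # early-boot environment picks up the new udev config
    nano /etc/udev/rules.d/70-persistent-net.rules
    update-initramfs -u
    reboot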
Sigh.
So, to get the machine working temporarily until I have all these devices figured out, I edited /etc/network/interfaces and used the new device names, followed by a reboot.
After reboot, I now have yet another old naming convention device, eth1, instead of its recent name, enp13s0f0...
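In case it helps anyone, the interfaces stanza I used looked roughly like this (the addresses are placeholders):

    # /etc/network/interfaces - renamed device, static address example
    auto enp13s0f0
    iface enp13s0f0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1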
FollowUp:
On a whim I decided to make a backup copy of my existing 70-persistent-net.rules file, delete the one in /etc/udev/rules.d, and reboot to see what happened.
My theory was that without this file assigning Ethernet device names, all of the devices would instead use the...
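The test itself was simple (paths as on a stock Debian/Ubuntu-style system):

    # Back up the rules file, remove the live copy, then reboot
    cp /etc/udev/rules.d/70-persistent-net.rules /root/70-persistent-net.rules.bak
    rm /etc/udev/rules.d/70-persistent-net.rules
    reboot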
Hey all,
I have been running a somewhat complex network setup on my server for some time:
2x Copper Gigabit Ethernet on Server Board
4x Copper Gigabit Ethernet (Intel 4x PRO/1000 NIC)
1x 10GBaseT Intel 82598EB
I'm not going to go into the details of what they are used for, as it is not...
Hey all,
Quick question.
I have an existing, very complicated container with lots of interfaces and mounts. It's running Ubuntu 14.04 LTS, which is about to go EOL.
I have tried ZFS snapshotting the existing container's rpool/subvol-110-disk-1 dataset and doing an in-place upgrade, but it...
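Roughly what I tried (container ID and dataset name as in my setup; the snapshot name is arbitrary):

    # Snapshot the container's root dataset so the upgrade can be rolled back
    pct stop 110
    zfs snapshot rpool/subvol-110-disk-1@pre-upgrade
    pct start 110
    # Then run the release upgrade from inside the container
    pct enter 110
    do-release-upgrade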
Good to know, thank you.
Maybe I am confused. Is it only the desktop version of 18.04 that defaults to netplan?
I am curious. How does it determine the OS version? Does it parse the container's /etc/lsb-release?
Hey all,
So, I know you configure the network interfaces for new containers in the web interface (or by editing the corresponding config file in /etc/pve/lxc), but how does it work when you actually power up the container?
The reason I ask is, I have a bunch of Ubuntu 14.04 based containers...
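For context, this is the sort of line I mean in the container config (the MAC address, bridge, and IP mode are just examples):

    # Example net0 entry from /etc/pve/lxc/<ID>.conf
    net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp,type=veth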
Thanks for the help.
I rebooted the server today, and it appears to be running normally again.
Hopefully a 4.18+ PVE kernel that fixes this issue will be made available soon.
I mean, I could easily compile one myself, download a mainline binary kernel, or add the sources for the kernel from...
Hmm.
I will have to check this a little later.
Does a reboot temporarily solve the issue? I could probably do that overnight, and then go another few months without running into it again.
My use case doesn't require restarting containers regularly. They start once when the server goes up...
So,
I am on the following kernel:
Linux proxmox 4.15.18-5-pve #1 SMP PVE 4.15.18-24 (Thu, 13 Sep 2018 09:15:10 +0200) x86_64 GNU/Linux
I just shut down a container today using "pct stop 200".
I went to start it back up again with "pct start 200" and this process just sits there doing nothing...
Hi all,
Is this advisable?
The reason I ask is, I'm not sure I fully understand how the PVE frontend configures the container's network and other settings.
14.04 and 16.04 use ifup/down and are thus configured in /etc/network/interfaces, but 18.04 replaces ifup/down with netplan, which is...
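For comparison, the same simple DHCP setup under both systems (the interface name is an example):

    # ifupdown (/etc/network/interfaces), as used by 14.04/16.04
    auto eth0
    iface eth0 inet dhcp

    # netplan (/etc/netplan/01-netcfg.yaml), as used by 18.04
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: true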