Since I have not used LXC since the post above, almost a year ago, I can't really help you with that.
I would ask you, though, to consider the reason _why_ you wish to experiment like this on your Proxmox host node.
The multiple major advantages of running your container...
I would also recommend an Intel i350-v2 or i354.
If you want to consider more options, then see here:
any card supported by the igb driver will do, because the igb driver always fully supports igb VFs too.
In the interests of inclusivity, in theory one...
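As a sketch of what those igb VFs look like in practice (the interface name enp1s0f0 and the VF count here are my assumptions, not from the post above):

```shell
# Create 4 virtual functions on the physical port at runtime
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# Or persistently, via the (older) igb module parameter:
echo "options igb max_vfs=4" > /etc/modprobe.d/igb.conf

# The VFs then show up as extra PCI NICs, ready for passthrough:
lspci | grep "Virtual Function"
```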
Skopeo is a useful standalone utility for inspecting and moving container images.
Skopeo is built and packaged, and available from the links that I posted above.
I used it for years with docker before Podman was even released.
If you have not...
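For anyone who hasn't tried Skopeo, two typical invocations (the image name is just an example):

```shell
# Inspect a remote image's metadata without pulling it
skopeo inspect docker://docker.io/library/alpine:latest

# Copy an image between locations, e.g. registry -> local directory
skopeo copy docker://docker.io/library/alpine:latest dir:/tmp/alpine
```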
So I made the choice to move away from using LXC for hosting containerised workloads.
I did consider LXD at length, but it is not integrated into PVE and does not 'just work' with the large and growing variety of 'dockerized' workloads.
For me, the advantages of managing your...
Yes, as it happens. I have been doing similar for years.
I ran OpenWrt as a raw image in a VM for a while, then migrated to hosting it as a container with Podman.
Security is obviously better within a VM, so I host the container runtime in a VM.
The issue of upgrading has always been, well, an issue...
Fortunately there exists a production ready alternative to the Docker problem.
It is called Podman.
You can run 'Docker' containers with minimal-to-no changes.
When you get used to it, it is a far superior platform and technology suite to 'Docker' TM
and you will...
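As a concrete illustration of how little changes (the image name and ports here are just examples, not from the post above):

```shell
# Run a stock image from Docker Hub, daemonized, with a published port
podman run -d --name web -p 8080:80 docker.io/library/nginx:latest

# The familiar commands work the same way
podman ps
podman logs web
podman stop web && podman rm web

# Many people simply alias it:
alias docker=podman
```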
Sorry for the reply 13 months later, Hyacin, but I found this last information interesting and don't immediately see how you share the loopback IP into many containers.
Obviously I can create a 127.x.x.x/32 IP on 'lo', or I might prefer to create a 'dummy' interface, as I already do to hold...
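For reference, both variants are one-liners with iproute2; the interface name and addresses below are hypothetical placeholders:

```shell
# Variant 1: an extra loopback address on 'lo'
ip addr add 127.0.0.2/32 dev lo

# Variant 2: a 'dummy' interface holding a service address
ip link add svc0 type dummy
ip link set svc0 up
ip addr add 192.0.2.10/32 dev svc0
```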
Doing exactly as you suggested has allowed me to progress into bootup of the installer. :)
Removing 'silent' and 'splashscreen' let me watch the boot proceed,
before it arrived at
"switching to radeondrmfb from VESA VGA".
No need for nomodeset.
I think your suspicion...
Did you make any progress with this error?
I also have a Gigabyte AM2+ board on which I am attempting to run the Proxmox 6.2 installer,
and am seeing a similar issue:
"hdaudio hdaudioc0d0: Unable to bind to the codec"
And yes, I know that I can install Debian then upgrade it, but I didn't want...
Once again Thomas, thank you so much for your attention, and advice here. :)
I aim to understand everything that happens to these 2 local 'pets', and hope for no surprises with the remote 'cattle' ;-)
Excellent! I would likely never have known that!
I shall consider this tonight and probably...
Thanks for your attention and swift helpful replies! :)
Yeah, so the history of these machines is long and uncertain,
since they serve as lab guinea pigs
IIRC I installed both from Debian 6 non-free ISOs and have been manually updating them throughout the years since the PVE 3.x times...
Thank you, Thomas, for that informative and helpful explanation.
Having fixed the failure to load module f71882fg
by adding "acpi_enforce_resources=lax" to 'GRUB_CMDLINE_LINUX_DEFAULT'
I have sensors data and shall desist from attempting to fiddle with firmware or AMD...
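For anyone following along, that change lives in /etc/default/grub; a minimal sketch, assuming the stock 'quiet' default is kept:

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_enforce_resources=lax"

# then regenerate the grub config and reboot:
update-grub
```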
I have two test servers that I recently upgraded from Debian jessie to stretch and PVE 5 latest, following the instructions in the wiki.
As it happens, both machines are AMD platforms with socket AM3+ Gigabyte 970 motherboards and discrete Radeon graphics, although all different...
I still see this problem on an updated PVE 4
(which was originally manually installed onto Debian 8 minimal):
auser@debian8:~$ pveversion -v
proxmox-ve: 4.4-87 (running kernel: 4.4.59-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
Thank you, thank you, thank you @fraksen
for making such a clear and useful post. :)
That is good advice, to use a bastion server and disable SSH password authentication.
I am trying out Nethserver at the moment.
If only the forum had a Tutorial section: this would be an ideal post for all newbies to learn from.
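A minimal sketch of the sshd_config side of that advice (key-based login must already be working before you apply it, or you will lock yourself out):

```shell
# /etc/ssh/sshd_config on the bastion
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password

# apply without dropping existing sessions:
systemctl reload ssh
```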
Thank you for that rapid and thoughtful answer fraksken. :)
That would be absolutely superb if you could make a guide! :)
I was going to go with the IPFire solution because it should 'just work' ... I have been using it on my LAN and PVE test boxes for years, but it didn't work on the online.net host. :-(...
I seem to have a similar problem.
I have been using IPFire as firewall for virtual ethernet on a test server for a long time.
Now I am trying to replicate the setup on a rented server from online.net, following along with their PVE KVM VM installation instructions...
TL;DR: this is NOT about being unable to access PVE from an external SSH client.
This is asking how to stop the firewall from blocking access to a local virtual ethernet device, which I happen to be accessing _from_ a local tun device that is the exit of an SSH point-to-point encrypted IP tunnel...
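For context, a tun-over-SSH tunnel like the one described can be set up roughly like this (the addresses and hostname are placeholders; it requires 'PermitTunnel yes' in the server's sshd_config):

```shell
# Client: request layer-3 tunnel device tun0 on both ends
ssh -w 0:0 root@pve-host

# Then assign the point-to-point addresses on each side:
ip addr add 10.9.9.1/32 peer 10.9.9.2 dev tun0   # on the client
ip addr add 10.9.9.2/32 peer 10.9.9.1 dev tun0   # on the server
ip link set tun0 up                              # on both sides
```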
I have previously succeeded in upgrading Debian to PVE 4 from the CLI.
Today, when attempting this on a remote host, I find a conflict with firmware.
I already saw: http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
'apt-get install proxmox-ve ntp ssh postfix...
So I am not an expert, but one possibility does occur to me.
If you write a general script that handles configuration management (I would prefer the Ansible way: collect answers, then decide on actions) and distributes files using ssh/scp/rsync,
you do not then have to perform a sequential...
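A minimal sketch of that idea (the hostnames, file path, and service name are all hypothetical):

```shell
#!/bin/sh
# Push one config file to several hosts, reloading the service
# only where the copy succeeded; each host runs in the background.
HOSTS="node1 node2 node3"
FILE=/etc/myapp/app.conf

for h in $HOSTS; do
    scp "$FILE" "root@$h:$FILE" \
        && ssh "root@$h" "systemctl reload myapp" &
done
wait
```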