Sometimes you get a bit stupid… I specified 32 MB of RAM for the VM… instead of 32768 (32 GB).
Of course, even booting was difficult. The message is a bit cryptic though!
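For reference, a minimal sketch of the fix from the host CLI (the VM ID 100 below is a placeholder):

# check the current memory setting (in MB)
qm config 100 | grep memory
# set it back to 32768 MB (32 GB)
qm set 100 --memory 32768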
I am trying to boot a compilation environment that we have, which was behaving quite well until we hit this problem…
It simply refuses to boot!
The disk is located on an NFS v4 mount (as all our disks are).
We had no particular problems until this failure…
OK, so here is how to proceed:
You need to create a file in the container called ".pve-ignore.hosts"; the file should be located in "/etc/", so "/etc/.pve-ignore.hosts".
This will let the system know NOT to rewrite, on boot, the modifications made in "/etc/systemd/network/".
Then you need to...
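As a minimal sketch of that flag-file step (the CT ID 101 is a placeholder), it can be created from the Proxmox host:

pct exec 101 -- touch /etc/.pve-ignore.hosts

The file stays empty; its mere presence is the marker.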
While this seems interesting: https://wiki.archlinux.org/index.php/Systemd-networkd#Configuration_examples
I would still like to know: what is the advised way to handle custom networking configuration in Proxmox containers?
This can be done at many different levels, including...
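For illustration, a minimal static setup in the systemd-networkd style shown on that wiki page (the interface name and addresses here are made up):

/etc/systemd/network/eth0.network:

[Match]
Name=eth0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1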
We are migrating containers from Ubuntu 14 to Ubuntu 18.
We have a configuration with routing inside the container (in /etc/network/interfaces):
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 23.51.52.24
netmask 255.255.255.240
gateway 23.51.52.1
auto...
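For what it's worth, a rough netplan equivalent of the static part above for Ubuntu 18 (an untested sketch; the /28 prefix corresponds to netmask 255.255.255.240):

/etc/netplan/01-eth0.yaml:

network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 23.51.52.24/28
      gateway4: 23.51.52.1

It can then be activated with "netplan apply".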
Hello,
I am about to spend a fairly significant amount of money on a new Proxmox server, which will mainly be used for heavy compilation and other KVM/LXC hosting.
The hardware is a Supermicro AS-1113S-WN10RT with an AMD EPYC 7501 (32 cores).
https://www.supermicro.com/Aplus/system/1U/1113/AS-1113S-WN10RT.cfm...
We have a Proxmox install where everything is up and running and up to date (5.2.5)… but with ridiculously slow (write) performance on the disks. For a dual-socket machine with 256 GB of RAM and 10K SAS disks this is really bad…
The system was formatted with two SSDs (mirrored) for the system on...
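To put numbers on "ridiculously slow", two quick checks that could be run against the affected storage (the paths below are placeholders, and fio may need to be installed first):

# Proxmox's built-in micro-benchmark; FSYNCS/SECOND is the telling figure
pveperf /var/lib/vz
# direct sequential write test, bypassing the page cache
fio --name=seqwrite --filename=/var/lib/vz/fio-test --rw=write --bs=1M --size=4G --direct=1 --numjobs=1
rm /var/lib/vz/fio-test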
Hello,
We are trying to move LXC containers from one host to another.
We have more than one ZFS mount point on the target system:
root@proxmonster:/home/xxxx# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
monster       640G  2.54T   192K  /monster
monster/data  639G  2.54T   208K  /monster/data...
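One possible workaround (a sketch: the CT ID 101, the shared dump path, the <timestamp> placeholder, and the storage name monster-data are all hypothetical) is to back up on the source host and restore on the target while pointing at the desired storage:

# on the source host
vzdump 101 --dumpdir /mnt/shared
# on the target host
pct restore 101 /mnt/shared/vzdump-lxc-101-<timestamp>.tar.lzo --storage monster-data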
That being said, I would be interested to know why this seems to happen after the reboot and the upgrade of the system from 4.3 to 4.4.
What is the root cause of this IGMP problem?
We didn't notice any such problem before the update.
Hi all,
We are having some strange issues with IGMP.
We have a three-node cluster running v4.4 (latest stable).
The cluster is working and we have managed to get everything running.
The server indicates: Cluster: pmox-osnet, Quorate: Yes.
The problem is that we have experienced a very...
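For context, the usual multicast sanity checks look something like this (vmbr0 and the node names are placeholders; omping is the tool the Proxmox docs suggest for exercising corosync multicast):

# is IGMP snooping active on the bridge? (1 = on)
cat /sys/class/net/vmbr0/bridge/multicast_snooping
# stress-test multicast between the cluster nodes (run simultaneously on each node)
omping -c 10000 -i 0.001 -F -q node1 node2 node3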
Well, it would be simpler to just migrate all backups to this improved version.
The suggestion of adding a tick box to include the name of the VM is nice and shouldn't break anything, not even older backups.
From my point of view (and obviously others') it is better to move on with something better...
Hello,
It is really very, very annoying (to say the least) to have containers and VMs backed up without the name of the guest in the prefix of the backup file.
There is nothing more similar to:
vzdump-lxc-125-2016_12_13-15_43_49.tar.lzo
than the same backup that you will do a couple...
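In the meantime, one workaround could be a vzdump hook script that renames the archive once the backup has finished. The sketch below relies on the documented hook interface (the phase arrives as the first argument; vzdump exports HOSTNAME and TARFILE around backup-end); the script path is hypothetical, and note that renamed archives may no longer match the naming pattern the GUI expects:

#!/bin/bash
# /usr/local/bin/vzdump-rename-hook.sh (hypothetical path)
# enable with: vzdump 125 --script /usr/local/bin/vzdump-rename-hook.sh
phase="$1"
if [ "$phase" = "backup-end" ] && [ -n "$TARFILE" ]; then
    dir=$(dirname "$TARFILE")
    base=$(basename "$TARFILE")
    # splice the guest name in after the vzdump-lxc- prefix
    mv "$TARFILE" "$dir/${base/vzdump-lxc-/vzdump-lxc-${HOSTNAME}-}"
fi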
Hello,
I would like to know if there is a simple way to stick to the 4.2 release.
I need to upgrade one of my cluster nodes and would like to avoid having one node on 4.3 while all the others are on 4.2.
Furthermore, I never upgrade my servers right after a new release.
Thanks in advance for your answer.
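For what it's worth, one way this could be approximated (a sketch, not an officially supported procedure, and holding packages also blocks their security fixes) is to put the core Proxmox packages on hold:

apt-mark hold proxmox-ve pve-manager
# later, to resume normal upgrades:
apt-mark unhold proxmox-ve pve-manager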
I have written a 12G file using dd and /dev/urandom, and the size of my qcow2 image hasn't moved a bit?!
I am really perplexed:
where is the data stored?
what if my data gets bigger than my storage?
How can a virtualized file system be bigger than the image which is supposed to contain...
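As far as I understand, a qcow2 image is thin-provisioned: the guest sees the full virtual size, while the file on the host only grows as clusters actually get allocated, so the two figures can legitimately differ. A quick way to compare them, with a hypothetical image path:

qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
# "virtual size" is what the guest sees; "disk size" is what is really allocated
du -h /var/lib/vz/images/100/vm-100-disk-1.qcow2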