Since upgrading to I've had problems. Last night several containers stopped and one refuses to restart.
$ lxc-start -n 105
lxc-start: lxc_start.c: main: 295 Executing '/sbin/init' with no configuration file may crash the host
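When `lxc-start` complains about running with no configuration file, a common first step is to start the container in the foreground with debug logging so the real failure shows up. This is only a sketch using standard `lxc-start` flags; the log path is just an example, and on Proxmox 4.x starting via `pct` is usually preferred since it knows where the generated container config lives.

```shell
# Run container 105 in the foreground with debug logging (example log path)
lxc-start -n 105 -F -l DEBUG -o /tmp/lxc-105.log

# Then inspect the log for the actual error
less /tmp/lxc-105.log

# On Proxmox 4.x, the managed way to start the container:
pct start 105
```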
Please help!
Thanks Dietmar
What about a VPS on a local IP range? For example, on this host I run some local VPSes using the 10.10.10.x range; the host has vmbr1 set to inet addr:10.10.10.1 Bcast:10.10.10.63 Mask:255.255.255.192. So I assume a VPS should then be set like the following?
pct set 101 -net0...
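Assuming the vmbr1 setup described above (10.10.10.1 with a /26 netmask), a `net0` line for such a VPS might look like the following. The container IP is a made-up example; the gateway would be the host's vmbr1 address.

```shell
# Sketch: attach CT 101 to vmbr1 on the local 10.10.10.0/26 range
# (10.10.10.2 is an example address; gw is the host's vmbr1 IP)
pct set 101 -net0 name=eth0,bridge=vmbr1,ip=10.10.10.2/26,gw=10.10.10.1
```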
I'm preparing to migrate a standalone machine from 3.4 to 4.2 and have some basic network questions. Previously I've only had to provide an IP for things to work, but now I notice that the GUI for LXC CTs contains Gateway and Bridge options. The migration document example sets both; e.g.
pct...
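For context, the Gateway and Bridge fields in the GUI correspond to the `gw=` and `bridge=` parts of the `-net0` option on the command line. A hypothetical `pct create` along those lines (template name, VMID, and addresses are examples, not taken from the migration document):

```shell
# Hypothetical example: Bridge -> bridge=, Gateway -> gw= in -net0
pct create 101 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
    -net0 name=eth0,bridge=vmbr0,ip=192.168.1.101/24,gw=192.168.1.1
```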
This may be a bit off-topic, but I'm seeking advice on what solutions work well with proxmox with regards to mass CT/VM upgrade management.
As a web developer I've been hosting my own clients on Proxmox for years now and loving it. I only run one server currently but I'm now going to fork out...
Thanks Igor, could you please clarify something for me. You say it doesn't need any patches, yet I see a pending update to pve-kernel-2.6.32-43-pve which would require a reboot to take effect. Hence my confusion.
I just installed this on a server using their free 1 month trial, on an upgrade attempt I get:
root@s2:~# kcarectl -u
Unknown Kernel (debian 2.6.32-42-pve)
I've opened a ticket with them so will see what they say.
My hardware vendor got back to me and puts a damper on using ZFS. He said...
Humph... nothing's ever easy. It looks like using ZFS on this hardware is out of the question. It's really hardware RAID or nothing. :(
Yeah, that's what I was thinking of. Any suggestions on how best to share extra space with a CT? Say I wanted to give a CT on pool1 an extra 500GB of space on pool2.
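One way to do this (a sketch, not the only approach) is to create a quota-limited dataset on pool2 and attach it to the container as an extra mount point. The dataset name, quota, and mount path below are all examples.

```shell
# Sketch: carve out 500GB on pool2 and hand it to CT 101 as a mount point
zfs create -o quota=500G pool2/ct101-extra

# Attach it inside the container at /mnt/extra (example path)
pct set 101 -mp0 /pool2/ct101-extra,mp=/mnt/extra
```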
Correct, with ZFS RAID 1 on pool1 and RAID-Z1 on pool2, and then I have a spare bay. I'll definitely use the compression, thanks for the tip. I'm not sure I'm confident enough to use template copying etc.; have you tried that?
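For the compression tip mentioned above, enabling lz4 is a one-liner per pool; note it only affects data written after the property is set.

```shell
# Enable lz4 compression on both pools (applies to newly written data only)
zfs set compression=lz4 pool1
zfs set compression=lz4 pool2

# Verify the setting
zfs get compression pool1 pool2
```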
Thanks mir, I hadn't considered the resilvering point and it really got me rethinking things. I've changed tack because of this and now think I'll go smaller and full SSD, like this:
| D1 | D2 | D3 | D4 | D5 | D6 |
| SSD 480 | SSD 480 | SSD 1TB...
I've been reading further on storage models in the wiki, and it looks like ZFS is the superior way to go, presuming it's stable enough for production now. If I were to go ZFS, from what I understand I should disable hardware RAID, or at least configure it as JBOD, and preferably on a server that doesn't...
I'm about to migrate my existing Proxmox server; I sell VPS hosting. My current server has 16GB RAM and 2 x 250GB SSD (RAID10), my new server will also have 2 x 4TB HDD (RAID10), and I'll likely increase the RAM to 32GB at some point. The current server stats...