Fresh install on squeeze fails

faceless
Hi all

I'm testing Proxmox 2.0 on a new machine, or hoping to. I've followed the instructions at http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Squeeze exactly but my install has failed.

This is on a brand new install - I've installed zsh, avahi-utils and sudo, but other than that it's out of the box.

* I've installed Debian Squeeze, then run aptitude update and aptitude safe-upgrade
* Added the Proxmox apt source, installed pve-kernel, rebooted, and confirmed the kernel with uname -a
* Installed ntp ssh lvm2 postfix ksm-control-daemon vzprocps with no problems.
* The install of vzctl failed; however, if I "mkdir -p /etc/vz/conf" and reinstall, it succeeds (the whole sequence is condensed into commands at the end of this post).
* The install of pve-cluster fails. The logs show "pmxcfs[1619]: Unable to get local IP address (pmxcfs.c:721:main)", however my /etc/hosts is correct:

127.0.0.1 localhost
127.0.1.1 mini1


and nothing else is giving me network issues. /etc/hostname is "mini1" as expected, I can ping 127.0.0.1 and 127.0.1.1, and I can ssh in and out. ifconfig gives:

eth0      Link encap:Ethernet  HWaddr 00:16:cb:ad:66:1e
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:cbff:fead:661e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3132 errors:0 dropped:0 overruns:0 frame:0
          TX packets:660 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:660219 (644.7 KiB)  TX bytes:117217 (114.4 KiB)
          Interrupt:16

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1664 (1.6 KiB)  TX bytes:1664 (1.6 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


and my routing table is:

Destination     Gateway         Genmask         Flags  MSS  Window  irtt  Iface
192.168.1.0     0.0.0.0         255.255.255.0   U      0    0       0     eth0
0.0.0.0         192.168.1.1     0.0.0.0         UG     0    0       0     eth0



As you can see I do have IPv6 enabled - I don't know if this is an issue, but I hope not because I can't seem to disable it: the instructions at http://wiki.debian.org/DebianIPv6 don't work (reboot gives "net.ipv6.conf.all.disable_ipv6 is unknown key"), and adding "alias net-pf-10 off" and "alias ipv6 off" to /etc/modprobe.d/aliases has no effect.
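For reference, this is roughly what those two attempts look like (the sysctl key is the one from the error message, and the alias lines are exactly what I added):

# /etc/sysctl.conf - per the Debian wiki; rejected as an unknown key on this kernel at boot
net.ipv6.conf.all.disable_ipv6 = 1

# /etc/modprobe.d/aliases - no effect either
alias net-pf-10 off
alias ipv6 off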


I'm pretty much out of ideas now. Any thoughts?
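For completeness, here's the whole sequence condensed into commands (the exact pve-kernel package name is whatever the wiki page lists, so I've left that step as a comment):

aptitude update && aptitude safe-upgrade
# added the Proxmox apt source from the wiki, ran aptitude update again,
# installed the pve-kernel package named on the wiki, rebooted, confirmed with uname -a
aptitude install ntp ssh lvm2 postfix ksm-control-daemon vzprocps
aptitude install vzctl          # fails on a clean box
mkdir -p /etc/vz/conf           # workaround: create the missing directory...
aptitude install vzctl          # ...after which the reinstall succeeds
aptitude install pve-cluster    # fails with the pmxcfs "Unable to get local IP address" error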
 
Thanks FinnTux

While that does fix the immediate problem, it's not ideal as it essentially forces me to run with a static IP address. It's also not really the "Debian way" - see section 5.1.2, "The hostname resolution". Obviously this is what beta testing is for, so hopefully the Proxmox team will find a better way to establish the local external IP (hint hint).
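For anyone else who hits this, the fix boils down to pointing the hostname at the box's real (static) address in /etc/hosts instead of 127.0.1.1, so pmxcfs can work out the local IP. On this box that means something like:

127.0.0.1       localhost
192.168.1.101   mini1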

While we're on testing: pve-cluster has a dependency on libdigest-hmac-perl, which isn't declared in the package. Not obvious because you'd normally install pve-cluster as a dependency of proxmox-ve which does declare this dependency.
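Until that's fixed, presumably pulling the dependency in by hand before pve-cluster avoids the problem:

aptitude install libdigest-hmac-perl
aptitude install pve-cluster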

Still having install issues, but I'll have to leave it 'til Monday before I follow up on those.
 
Thanks FinnTux

While that does fix the immediate problem, it's not ideal as it essentially forces me to run with a static IP address...

Yes, you need a static IP.
 
While we're on testing: pve-cluster has a dependency on libdigest-hmac-perl, which isn't declared in the package. Not obvious because you'd normally install pve-cluster as a dependency of proxmox-ve which does declare this dependency.

Thanks for reporting that - it will be fixed in the next release.
 
Ah. Well, if corosync requires it then I guess I'm out of luck.

As for the "debian way", perhaps I'm overstating this. A fresh install doesn't leave /etc/hosts like that and I've never had to do it before. But hey ho, whatever. Might be worth noting this step, and the requirement for a static IP, in the install documentation.

Incidentally, I plugged this to the Proxmox team years ago and I'll do it again now that 2.0 is firming up: we use zeroconf (avahi on Linux) to manage our network of dynamic-IP boxes - it's fantastic. Although you're using static IP addresses it might still be worth a look, as its primary purpose is service announcement/discovery - this is how iTunes can tell you who else is sharing their iTunes library on the network, for example. If you're setting up clusters of Proxmox boxes it would allow them to find each other without any configuration, which would be quite neat, and the protocol and API are fairly lightweight. Just putting it out there :)
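To make that concrete, here's the sort of thing I mean, using the avahi-utils tools already installed on this box (the service type "_pve-cluster._tcp" and the port are made up purely for illustration):

# on each node: announce a hypothetical cluster service for as long as the command runs
avahi-publish-service $(hostname) _pve-cluster._tcp 5405 &

# on any node: discover the peers with zero configuration
avahi-browse --resolve --terminate _pve-cluster._tcp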
 
