That's the issue: none of the processes are 'hanging'. They eventually run, just very slowly, and stracing reveals nothing other than normal syscalls.
Even stracing the kvm process itself reveals nothing unusual.
Yes, but I should still see the OpenSSH banner via telnet, regardless of host lookup.
This issue isn't just affecting SSH connections, it's any service that's running on the system.
Even launching 'top' takes a few seconds.
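As a quick way to quantify the slowdown, you can time a bare file-descriptor open from the shell (a minimal sketch; nothing here is specific to this box):

```shell
# Time how long it takes just to open and close a new file descriptor.
# On a healthy host this is effectively instant; during the slowdown
# it should show the multi-second delay described above.
time bash -c 'exec 3</dev/null && exec 3<&-'
```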
Hi all,
I'm having a really weird issue: a KVM hypervisor running CentOS 6.7 starts up fine, and then anywhere between 3 and 12 hours later anything that requires opening a new socket or file descriptor becomes very slow.
While it is slow, telnetting to port 22 from an external...
The init script responsible for setting /etc/issue is /etc/init.d/pvebanner, a small Perl script that updates the file on every boot.
sudo update-rc.d pvebanner remove
will disable the script and allow you to set /etc/issue manually (we use puppet).
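Once pvebanner is disabled, the banner can be written by hand or by configuration management; a sketch (the final mv to /etc/issue is left commented out since it needs root, and the banner text is just an example):

```shell
# Build the replacement banner in a temp file first, then move it into
# place; puppet or similar can manage the file the same way.
banner=$(mktemp)
printf 'Authorized access only.\n' > "$banner"
# mv "$banner" /etc/issue
```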
Hi all.
I'm using ZenOSS to monitor our proxmox servers.
During the modelling phase, ZenOSS will grab an initial processlist and match against a table of known processes that should be running.
Trouble is, it's picking up the processes running inside the various containers.
When a process...
If you're running this command from the Proxmox server that you just created the cluster with, it won't work.
Run this command on the nodes that are to be joined to the cluster instead.
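For reference, the join looks roughly like this when run from each new node, pointing at an existing cluster member ("vz1" is a stand-in for an existing node's hostname or IP):

```shell
# run on the node that is joining, NOT on the existing cluster member
pvecm add vz1
# then confirm membership
pvecm status
```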
Re: Container console black with white cursor on Proxmox 2.0.1 - Can't interact with
just double checked the version I'm on and it's the latest:
root@pvz1:~# pveversion
pve-manager/2.2/c1614c8c
Hi Guys,
Hope you can help me.
I've used Proxmox for a while now and I'm used to being able to connect to a container's pty via the Java applet VNC console.
Recently though, I've installed a fresh 2.0.1 proxmox and spooled up a couple of containers and all I can see is this:
Spooling up...
Not on this subnet, no. The HA file servers are using eth0 / 239.0.0.1.
I configured /etc/pve/cluster.conf to use 239.0.0.2
I verified that the new multicast address was being used: after restarting, netstat -lpnu showed corosync listening on 239.0.0.2.
However this...
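For anyone trying the same change: the multicast address goes inside the cman element of /etc/pve/cluster.conf. A sketch (cluster name, config_version, and node entries are placeholders):

```xml
<?xml version="1.0"?>
<cluster name="mycluster" config_version="2">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
    <multicast addr="239.0.0.2"/>
  </cman>
  <clusternodes>
    <!-- node entries unchanged -->
  </clusternodes>
</cluster>
```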
Ok, some more progress.
I set up a couple of old Dell 1850s in a DRBD / Heartbeat v1 / NFS configuration, just like we do at the datacenter, and they worked fine.
I then booted up a single Proxmox 2.1 server and attached it to the same switch on the same vlan / subnet and watched the HA log...
Interestingly, both of the file servers can ping each other constantly while the failure is occurring. That would indicate the multicast traffic on the subnet is being modified or altered by the Proxmox boxes when they are plugged into the network; I'm sure it's something to do...
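One way to confirm the multicast suspicion is to run omping (the multicast ping tool used for testing corosync networks) on both file servers at the same time; a sketch with hypothetical addresses:

```shell
# run simultaneously on both hosts; 10.0.0.1 and 10.0.0.2 are stand-ins
omping -c 20 10.0.0.1 10.0.0.2
```

If unicast replies come back but multicast ones don't, IGMP snooping on the switch eating the traffic is a likely culprit.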
Hi Guys,
I have a very strange issue here that hopefully some of you know the answer to.
I have 2 x proxmox 2.1 servers.
They have two interfaces each, eth0 is bridged to vmbr0 and eth1 is just a peer to peer link to the other proxmox server via crossover cable.
Server 1:
Hostname vz1...
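A setup like that is typically written in /etc/network/interfaces along these lines (a sketch, not the actual config from these servers; all addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# point-to-point link to the other node over the crossover cable
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.252
```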
This feels very "Windows 95"-esque.
Why is there a requirement to reboot your server if you make any changes to the network interfaces?
I've been moving interfaces.new into the place of interfaces and ifup'ing the new or changed interface, and it works fine.
Any reason why it's been...
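The workflow described above, as a sketch (vmbr0 is just an example interface name):

```shell
# promote the pending config written by the GUI, then reload only
# the interface that changed -- no reboot required
mv /etc/network/interfaces.new /etc/network/interfaces
ifdown vmbr0; ifup vmbr0
```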
I have tried a dual-primary configuration with OpenVZ.
We used the OCFS2 filesystem to have both primaries be able to mount the DRBD resource at the same time.
While the drives mounted correctly and there was no data corruption, we found that our I/O waits were constantly above 50%; this...
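For anyone wanting to reproduce the test: dual-primary is enabled in the DRBD resource config roughly like this (DRBD 8.3 syntax; the resource name and layout are illustrative):

```
resource r0 {
    net {
        allow-two-primaries;
    }
    startup {
        become-primary-on both;
    }
    # ... disk / on-host sections as usual
}
```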