Hi Guys
I've been told in the past, when configuring HDDs in a LUN for an ESXi server, to leave 20% free space to accommodate host swap files etc. as best practice. What are the recommendations for Proxmox (KVM) guests configured on a LUN in shared storage? Is this even a concern?
Old thread, I know, but I stumbled across this http://cantivo.org/ that looks like it could be promising from a purely virtual desktop view. I tried the latest build (1.0.1 svn-550) but had a number of issues with it; hoping for more in future releases.
pfSense 2.0.1 doesn't run well in Proxmox; CPU usage just goes through the roof. Try the 2.1 beta. I have a 32-bit version running perfectly with virtIO drivers (balloon, NIC, HDD, etc.). I haven't tried the 64-bit version yet myself.
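For reference, a rough sketch of how that VM could be set up from the host shell with qm (the VM ID 101, the storage name "local" and the disk size are all placeholders, adjust to suit):

qm set 101 --ostype other                 # pfSense is FreeBSD-based
qm set 101 --net0 virtio,bridge=vmbr0     # virtIO NIC on the default bridge
qm set 101 --virtio0 local:8              # allocate an 8GB virtIO disk on the "local" storage
qm set 101 --memory 1024                  # 1GB RAM; the virtIO balloon device should be added by default

Once the virtio modules are loaded inside pfSense, the NIC/disk/balloon drivers get picked up from there.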
The 2TB limitation applies to ESX server for local storage. I'm afraid I can't remember off the top of my head whether that also applies to LUN size on external storage.
It is one of the more annoying limitations when building a standalone ESX server. (I can't comment on Hyper-V or Citrix, not having...
Just out of curiosity, I was wondering if any testing is going on in the Proxmox labs regarding the SPICE protocol? If there is, any chance of seeing a tech preview version of PVE with it? (Just to play with, unsupported, etc...)
I'd been looking forward to it since pre version 2.0, when it was in...
I had similar speeds on an IBM System x3650 box. It turned out that the battery (BBU) on the RAID card had died and the card fell back to "Write Through" mode.
Check that you don't have a similar issue and that the RAID set is set to "Write Back".
I'd take a guess that disk cache is turned...
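If it's an LSI-based card, you can usually check both from the host shell with MegaCli (assuming the 64-bit binary is installed as MegaCli64; adapter numbering may differ on your box):

MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll                 # BBU state, charge level, replacement warnings
MegaCli64 -LDInfo -LAll -aAll | grep -i 'cache policy'   # current vs default write policy per logical drive

If the current policy shows WriteThrough while the default is WriteBack, the card has fallen back because of the battery.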
I have a fully up-to-date 2.1 install that is still hanging on backup, this time with LZOP.
It is a standalone machine backing up to an NFS share on a FreeNAS box.
EDIT: I did complete a backup without incident using GZIP.
Apr 30 16:48:21 Server01 kernel: INFO: task lzop:7857 blocked for more...
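For anyone hitting the same thing, forcing gzip for the job works around it for now; something along these lines (the VM ID and the NFS storage name are placeholders):

vzdump 101 --compress gzip --storage freenas-nfs --mode snapshot

The same can be set permanently with "compress: gzip" in /etc/vzdump.conf.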
Hi e100
The manual df was issued while this same backup was running and came back quickly with the correct results.
Wouldn't that be affected by the same IO problems then?
During backups (when the host doesn't hang on me) I get the attached warning messages in the syslog, but the df command runs fine when issued manually on the host via shell access. :confused:
I'm getting the same problem, except mine is locked by (backup): I can't boot, back up, or delete the server.
This happened after the host server hard-locked during a backup of said VM (the console displayed kernel timeout issues; image posted in another thread).
Any help appreciated.
Well, this server does seem to have very poor IO (for writes). It's an IBM System x3650 with 1x Xeon E5520 proc & 8GB RAM. It has 6 HDDs in RAID 1+0 on an IBM/LSI RAID card with BBU. I built a 2k3 server in KVM on the host, leaving everything at the default options (but with VirtIO net/disk), during...
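For comparison, the sort of quick write tests I'd run on the host to show the difference a dead BBU makes (the dd target path is just an example, point it at the affected storage):

pveperf /var/lib/vz                                                     # FSYNCS/SECOND is the number to watch
dd if=/dev/zero of=/var/lib/vz/ddtest bs=1M count=1024 conv=fdatasync   # raw sequential write speed
rm /var/lib/vz/ddtest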
I take it this didn't make it into the web GUI for the final release? (Unless it's somewhere I haven't looked.)
It would be extremely handy to me right now :/ (using cloned CTs & VMs for testing application upgrades against production machines; I need to start them up to change the static IP addresses).
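In the meantime, for the CT side I assume something like this would do it from the host without booting the clone (the CT ID and address are placeholders):

vzctl set 101 --ipdel all --ipadd 10.0.0.50 --save   # swap the venet IP in the container config
vzctl set 101 --hostname app-test01 --save           # optional: rename the clone too

The KVM clones are another story, since their static IPs live inside the guest OS.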
I realize fencing is for more than just trying to restart the failed host, but could you use a lack of ping, then WOL to attempt the restart, and then go to fail mode after a timeout?
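Something like the logic below is what I had in mind; purely a hedged sketch of the idea, not a real fence agent (the node IP/MAC and the wakeonlan package being installed are assumptions):

#!/bin/sh
# crude ping -> WOL -> timeout idea, not a supported fencing method
NODE_IP=192.168.1.50         # placeholder
NODE_MAC=00:11:22:33:44:55   # placeholder
if ! ping -c 3 -W 2 "$NODE_IP" > /dev/null; then
    wakeonlan "$NODE_MAC"                  # try to power the node back on
    sleep 120                              # give it time to boot
    ping -c 3 -W 2 "$NODE_IP" > /dev/null || echo "node still down - fail it over" >&2
fi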
I just fired up nano, created a file called vpn_modprobe in /etc/init.d and entered this:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          modprobe
# Required-Start:
# Required-Stop:
# Default-Start:
# Default-Stop:
# Short-Description: modify server for openvpn
# Description:
### END INIT INFO...
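The rest of the file is nothing fancy; roughly something like this, as a minimal sketch of the body (the exact script may differ):

case "$1" in
  start)
    modprobe tun    # load the tun module so containers can use /dev/net/tun
    ;;
  stop)
    # nothing to undo on stop
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
exit 0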
I have an OpenVPN server running quite happily in OpenVZ on Proxmox, using these instructions from OpenVPN: http://openvpn.net/index.php/access-server/docs/admin-guides/186-how-to-run-access-server-on-a-vps-container.html
The only extra thing is that I created a boot-up script to run the modprobe...
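For reference, the host-side setup from that guide boils down to something like this, if I'm remembering it right (container ID 101 is just an example):

modprobe tun                                    # load the tun module on the host
vzctl set 101 --devnodes net/tun:rw --save      # expose /dev/net/tun to the container
vzctl set 101 --capability net_admin:on --save  # allow the container to configure its tun interface
vzctl exec 101 mkdir -p /dev/net
vzctl exec 101 mknod /dev/net/tun c 10 200      # create the device node inside the container
vzctl exec 101 chmod 600 /dev/net/tun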
Actually, I may have to eat my words on this :/ We had E5520s as well when they were hot off the presses (it actually forced us onto ESXi 4, as the older versions wouldn't run on our blades with these procs). VMware told us it would cause problems with Fault Tolerance, among other things...