Great idea!
+1 for iPhone
+1 for Android
Version 1: start/stop/restart VM + general statistics
Version 1.5: remote control + logs (VM and host)
Version 2: VM configuration (create/delete/modify)
Version 2.5: template/iso
Version 3: host configuration options
Version 3.5: storage/backup
Version...
I haven't tested the 2.0 (no spare machine) nor the second workaround (2 CPUs per VM with Java), only the kernel downgrade (not a real workaround, though). I already use a vswap-enabled config with 1.8 and 1.9, but the problem with the JVM also exists with that configuration and the -6 kernel. Will...
Do we have any progress with this bug? A use case has been defined; do you need another one (I can describe Zimbra installation that would raise the problem, but it's a little bit more complicated than pezi's use case)?
Tom, the problem described here is not related to failcnt; it's something else.
Please see my first post above.
Using a vswap-enabled configuration (based on the samples in /etc/vz/conf), I manage to get vswap and no failcnt at all (as expected, since most limits are set to unlimited), but with kernel...
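A quick way to confirm "no failcnt at all" is to filter /proc/user_beancounters for nonzero failure counters. The sketch below embeds a small sample excerpt so the filter can be tried on any machine; on a real OpenVZ host you would point the awk command at /proc/user_beancounters itself (the sample values and the /tmp path are illustrative, not taken from the thread):

```shell
# Sketch: list beancounters whose failcnt (the last column) is nonzero.
# Embedded sample excerpt so this runs outside an OpenVZ host.
cat > /tmp/beancounters.sample <<'EOF'
       uid  resource           held    maxheld    barrier      limit    failcnt
       101: kmemsize        2752512    4259840   14372700   14790164          0
            privvmpages       65536     131072  9223372036854775807  9223372036854775807          3
            physpages         32768      65536          0     262144          0
EOF
# Skip the header line (NR > 1); failcnt is the last field, the
# resource name is always 5 fields before it.
awk 'NR > 1 && $NF > 0 { print $(NF-5), "failcnt=" $NF }' /tmp/beancounters.sample
```

If every counter is at 0 the command prints nothing, which is the healthy state described above; here the sample deliberately contains one privvmpages failure so the filter has something to show.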
Confirmed for me too.
Zimbra has been running flawlessly and rock solid for 60 hours with only the kernel downgraded to 2.6.32-4; everything else is still Proxmox 1.9.
pve-manager: 1.9-24 (pve-manager/1.9/6542)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.9-43
pve-kernel-2.6.32-4-pve: 2.6.32-33...
+1 for me.
I run Zimbra in a Lucid OpenVZ container (configuration based on ve-vswap-1024m.conf-sample in /etc/vz/conf, so every parameter except PHYSPAGES, SWAPPAGES, KMEMSIZE and LOCKEDPAGES is unlimited). All failcnt values are 0, but after "some time" (5 minutes, 6 hours, 15 hours), Zimbra stops...
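For readers unfamiliar with the vswap-style samples, a container config of the kind described might look roughly like this (vzctl sources the file as shell variable assignments; the exact values are illustrative, not a copy of the shipped ve-vswap-1024m.conf-sample):

```shell
# Illustrative OpenVZ vswap-style config fragment (barrier:limit syntax).
# Only RAM, swap and a couple of kernel-memory parameters carry limits;
# the remaining beancounters are left unlimited, as in the /etc/vz/conf
# vswap samples the post refers to.
PHYSPAGES="0:1024M"        # container RAM
SWAPPAGES="0:512M"         # vswap
KMEMSIZE="465M:512M"       # kernel memory, barrier:limit
LOCKEDPAGES="512M"
NUMPROC="unlimited"        # everything else stays unlimited,
PRIVVMPAGES="unlimited"    # so failcnt should never increment
```

With most limits at unlimited, failcnt staying at 0 is expected, which is why the Zimbra stalls reported here point at the kernel rather than at resource exhaustion.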